Computers, like humans, can read and write. In computer programming, and specifically in object-oriented programming languages, programs employ methods that operate on data held as attributes, assembling objects that interact with one another as the building blocks, or atomic elements, of a given program.
Objects have attributes and methods, and an object's methods are very often capable of reading and writing its attributes. As computers have become more advanced, meta-ification, the capitalization on logical operations over self-referential data, has emerged as a powerful force for accelerating the performance of computer algorithms. While the meta-algorithmic paradigm is an exciting one, it is unlikely that computers, and by extension algorithms, will one day read and write themselves completely independently. Humans will always be needed to read and write algorithms, but algorithms can read and write other algorithms. What follows is an introduction to meta-data, meta-inquiry, meta-hypotheses and meta-learning.
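The object-oriented picture above, methods that read and write attributes, can be sketched in a few lines. This is a minimal illustration, not any particular library's API; the `Counter` class and its method names are purely illustrative.

```python
# A minimal sketch of an object whose methods read and write its own
# attribute. All names here are illustrative.

class Counter:
    def __init__(self):
        self.count = 0  # attribute (data)

    def increment(self):
        # method (operation) that writes the attribute
        self.count += 1

    def read(self):
        # method that reads the attribute
        return self.count

c = Counter()
c.increment()
c.increment()
print(c.read())  # prints 2
```

The object is exactly what the text describes: a bundle of state (`count`) plus the operations (`increment`, `read`) that act on that state.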
There is an important distinction between human intelligence and AI. AI systems are collections of methods that operate on input data as attributes comprising the system’s internal state. Human intelligence is an assembly of attributes and methods (read: neurons, DNA and biology) that operate on external and internal data, rendering a symbiotic mosaic of states governed by a biological state of action. For the computer, meta-instantiation primarily indicates the presence of meta-instances housing meta-data that describe the empirical and theoretical state-space of base objects and base data. For a human, meta-instantiation indicates a state of action: the creation and definition of a unique meta-object/meta-instance capable of initializing and accessing a memory of base objects belonging to a defined category as a collection of meta-data. A computer programmer may dictate a state of action over the state(s) of a computer program, particularly through the definition, creation and instantiation of meta-instantiations.
In this way, it seems that in the meta-computing paradigm, meta-algorithms are like object-oriented attributes/data/state(s), while meta-optimizations are like object-oriented methods/procedures/state(s) of action: they are capable of altering the attributed meta-algorithms, and they are importantly catalyzed by humans as the responsible practitioners of self-awareness and self-reference.
As such, let us embark on a journey of meta-inquiry to explore the universe of self-awareness and self-reference.
Returning to the title of this week’s volume seems fitting:
Meta-Inquiry. Define Meta?
Meta: Showing explicit awareness of oneself as a member of one’s own category; self-referential.
The concept of self-reference as the basis of meta-instantiation is an important one. The human capacity for self-consciousness is a powerful one, with few parallels observed in nature. Consider that the human brain is a rare example of a consciousness that studies itself: true self-consciousness indeed. Thus, on the topic of the theory of meta-inquiry, it seems the human brain may be a valuable source of information.
Concretely, meta-inquiry can be defined as inquiry about inquiry. Some examples are given below:
- Which lines of inquiry are self-sustaining?
- Which lines of inquiry have a latent ordering?
As the questions above have lines of inquiry as their very subjects, it is understood that meta-inquiry requires meta-data. Any sufficient answer to these questions should have an associated collection or memory of base-data identifying which lines of inquiry are valid or invalid according to the meta-inquiry.
Meta-instantiation is a powerful concept. Meta-instantiation can refer to the actual process of allocating and saving meta-data in a computer memory, or it can be used to simply identify a novel meta-object which governs base-objects in order to introduce a certain level of self-referential functionality.
If I name a meta-instance, what is the definition of this meta-instance as implicated by the definition of the base instance?
Meta-instantiation is a compelling exploration of meaning. Any meta-instantiation should house and communicate meta-data. This self-referential recording of information provides much-needed support for inquiry into meaning. Self-reference introduces self-consciousness to functionality. Whether base subject identities are completely defined or not, meta-instantiations of a subject implicate a certain kind of self-awareness. In computing, this awareness may be limited to meta-instances’ awareness of base instantiations.
More explicitly, many meta-instances have meta-meta-instantiations. However, while a human may be aware of meta-instances and higher-order meta-meta-instantiations, meta-instances instantiated in the computer’s memory are only functionally aware of lower-order base instantiations.
Meta-data is a key element of meta-instantiation. Meta-data informs us about a base data source but contains no data from the base data; it is data about data. For example:
- The titles (and intended meaning thereof) of columns and rows.
- The number of entries (total).
- The number of entries (by distinct grouping(s)).
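The three kinds of meta-data listed above can be derived mechanically from a base dataset. The sketch below assumes a small in-memory table of rows; the column names and values are made up for illustration.

```python
# Hypothetical illustration: meta-data describes the base data without
# containing it. We derive the three kinds of meta-data listed above
# (column titles, total entry count, counts by distinct grouping).

from collections import Counter

base_data = [
    {"word": "meta", "part_of_speech": "adjective"},
    {"word": "inquiry", "part_of_speech": "noun"},
    {"word": "instantiate", "part_of_speech": "verb"},
    {"word": "hypothesis", "part_of_speech": "noun"},
]

meta_data = {
    # titles of columns
    "columns": list(base_data[0].keys()),
    # number of entries (total)
    "total_entries": len(base_data),
    # number of entries (by distinct grouping)
    "entries_by_group": dict(Counter(row["part_of_speech"] for row in base_data)),
}

print(meta_data)
```

Note that `meta_data` contains no base values from the `word` column; it only communicates about the dataset, which is exactly the distinction the text draws.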
If the base data were presented without meta-data, it would fail to communicate anything about itself.
For example, suppose I gave you two copies of the same base data with different column or row orderings. Without the meta-data, could we determine anything about the difference between the two orderings beyond the fact that one is a permutation of the other? It is not likely. Even where it is possible, such as the case of a dictionary where one column is a list of words and another is a list of their definitions, depending on the size of the dataset it would still take far longer to process what the base data communicates without the meta-data present.
Insofar as meta-data, when paired with base data, enables a dataset to communicate about itself, the data becomes self-conscious.
In the functionally self-referential way that meta-data communicates about itself, meta-inquiry questions itself.
Novel information can always enter from externally joined parties. For example, our base data may be joined with new data, yielding an updated meta-data summary that describes the new base-data source and the changes to the base dataset; or our most recent inquiry may be answered, yielding another question whose answer will require future results that build on current knowledge.
Thus, meta-data and meta-inquiry are not solely self-informed, but rely on a correctly instantiated memory of subjected base data for communication about themselves.
Understanding the inclination to explore meta-data, meta-inquiry and meta-learning, it may be wise to investigate a theory of meta-hypotheses, which provides foundational assumptions and principles to guide the design of meta-instantiations capable of meta-learning.
With this in mind, I would like to pose a question: How are questions and hypotheses linked and what does this say about meta-hypotheses under the shadow of meta-learning?
I believe that at the heart of every hypothesis studied under the scientific method lies a question: is the (null) hypothesis true?
Meta-hypotheses are understood as hypotheses about hypotheses. Importantly, these meta-hypotheses should, by definition, incorporate meta-data about subjected base-hypotheses in order to begin inquiry into valuation of subjected base-hypotheses as valid or not.
I believe meta-hypotheses are an important guide to meta-learning, aiding computers in learning to learn. Meta-data about subjected base policies and their inductive bias forms the basis of meta-hypotheses, which seek to quantify meta-learner confidence in base policies. Successful meta-hypotheses catalog and systematize subjected base policies and navigate the inductive bias topology of this catalog to yield high-confidence, high-performance policies.
It is worth mentioning that a great example of meta-inquiry is the design of meta-objects in object-oriented programming. As objects belonging to a given class are instantiated in computer memory, meta-objects operate on base objects with knowledge of how to iteratively alter the runtime behavior of base object procedures. Meta-object instantiation initializes information on the subjected base object’s methods and attributes, and the meta-object class, by definition, owns methods to alter the subjected base object’s methods/procedures as well as its data/attributes. Base object instantiation, by contrast, initializes data on base object attributes, with the base object class, by definition, owning methods to alter only that same base object instantiation’s attributes. Thus, meta-object operations on base objects require logical reasoning about meta-data implicating base object operation definitions, in order to purposefully alter base object methods and attributes in a way that base objects alone cannot, lacking as they do the logical capacity to access, communicate and reason about meta-data concerning themselves.
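The asymmetry described above, a meta-object that holds meta-data about a base object and can rewrite the base object's behavior at runtime, can be sketched concretely. This is a hedged illustration under assumed names (`BasePolicy`, `MetaObject`, `alter_method` are all invented for this example), not a canonical meta-object protocol.

```python
# Sketch of the meta-object pattern: the meta-object records meta-data
# about a base object's attributes and methods, and can replace a base
# method's runtime behavior, which the base object's own methods cannot.

class BasePolicy:
    def __init__(self, scale):
        self.scale = scale          # base attribute

    def act(self, observation):    # base method: reads its own attribute
        return self.scale * observation

class MetaObject:
    def __init__(self, base):
        self.base = base
        # meta-data: the names of the base object's attributes and methods
        self.meta_data = {
            "attributes": list(vars(base)),
            "methods": [m for m in dir(base)
                        if not m.startswith("_") and callable(getattr(base, m))],
        }

    def alter_method(self, name, wrapper):
        # Replace a base method with a wrapped version at runtime.
        original = getattr(self.base, name)
        setattr(self.base, name, wrapper(original))

base = BasePolicy(scale=2)
meta = MetaObject(base)

# The meta-object wraps the base method so it doubles its own result.
meta.alter_method("act", lambda f: (lambda obs: 2 * f(obs)))
print(base.act(3))  # 2 * (2 * 3) = 12
```

The key point the text makes is visible here: `BasePolicy.act` can only read and write the instance's own attributes, while `MetaObject.alter_method` reasons over meta-data (a method's name) to change what `act` does.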
A sufficiently interested and performance oriented computer programmer may ask:
Which meta-objects are self-sufficient enough to exist in running code without human intervention?
I believe this question is strongly linked to the meta-inquiry: Which lines of inquiry are self-sustaining enough to complete themselves?
Importantly, computerized meta-instantiation can require effective stopping conditions to prevent these programs from becoming runaway trains; this is why the concession about lines of inquiry that complete themselves is notable.
It seems possible that self-sustaining lines of inquiry about base-object data/attributes should yield valuable hypotheses concerning the design of self-sufficient meta-object instances capable of altering base-object methods and attributes without human oversight.
Understanding that meta-learning procedures have associated meta-hypotheses, supported by foundational assumptions that contribute to the inductive bias topology cataloged by those procedures, can be valuable for realizing effective stopping conditions for meta-learning procedures that hope to execute computing tasks without human intervention. Contrarily, understanding the assumptions that support a meta-learner’s meta-hypotheses is not necessarily enough to guarantee that stopping-condition decision criteria lead to highly optimized policies. As such, meta-learner training is often executed as a few-shot learning problem: machine learning practitioners allow a meta-optimization procedure to run for a few iterations, then pause to observe the effects of the meta-optimization and consider the quality of the meta-learner’s optimizations in the context of the subjected base policies. In this way, researchers can intermittently certify that the foundational meta-hypothesis assumptions supporting the meta-learner are resulting in highly optimized learned policies, or test whether different initializations of meta-learner and base-policy attributes and methods lead to notably different optimized policies.
Example! In meta-learning, we can think of a meta-learner in reinforcement learning as simply being responsible for re-instantiating a base policy-class object. A meta-learner is not necessarily responsible for modifying the base object’s methods, but its methods can re-instantiate the policy-class base object, altering the base object’s attributes until the attribute initialization that yields the best policy performance is found. The policy-class base object itself is blind to its own policy performance; this performance is monitored by the meta-object, which catalogs a historical memory of base object instantiations in order to initialize optimized future instantiations of the subjected base object.
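A minimal, hypothetical version of this example can be written directly. The meta-learner below never edits the base policy's methods; it only re-instantiates the policy class with different attribute initializations, keeps a memory of (initialization, performance) pairs, and returns the best-performing instantiation. The `Policy`, `evaluate` and `MetaLearner` names and the stand-in reward function are assumptions for illustration, not a real RL setup.

```python
# Sketch: a meta-learner that searches over base-object attribute
# initializations by re-instantiation, never by modifying base methods.

class Policy:
    def __init__(self, threshold):
        self.threshold = threshold  # attribute set at instantiation

    def act(self, observation):
        return 1 if observation > self.threshold else 0

def evaluate(policy):
    # Stand-in performance measure: count correct actions on a tiny,
    # made-up set of observations. The policy is blind to this score.
    observations = [0.1, 0.4, 0.6, 0.9]
    labels = [0, 0, 1, 1]
    return sum(policy.act(o) == y for o, y in zip(observations, labels))

class MetaLearner:
    def __init__(self, policy_cls):
        self.policy_cls = policy_cls
        self.memory = []  # historical memory of base-object instantiations

    def search(self, candidate_inits):
        for init in candidate_inits:
            # Re-instantiate the base object with a new attribute value.
            policy = self.policy_cls(threshold=init)
            self.memory.append((init, evaluate(policy)))
        best_init, _ = max(self.memory, key=lambda pair: pair[1])
        return self.policy_cls(threshold=best_init)

meta = MetaLearner(Policy)
best = meta.search([0.0, 0.3, 0.5, 0.8])
print(best.threshold)  # the initialization with the best score: 0.5
```

The division of labor matches the text: the base `Policy` only reads its own attribute, while the meta-learner monitors performance and uses its memory of past instantiations to produce the optimized future instantiation.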
One of the best features of computers is the functional utility of their reading and writing facilities. Computers are an amazing extension of humanity. Computers perform large calculations much more efficiently than humans because they can read information (i.e. process rules and input data) much faster than humans can. It is this same reading faculty that enables computers to write text, for example, to save new data in memory.
As both humans and computers can read and write, an important distinction to recognize is the creative writing capacity of humans. I conjecture that human self-awareness gives us an ability to define both ourselves and our novel ideas in a way that makes us unique, collectively and individually. Yes, computers can perform some tasks much more efficiently than humans, but ultimately, computers are limited by their inability to define themselves and novel ideas.
A computer will always need to read yesterday’s dictionary to answer the question “What is the meaning of this?”. A human can develop, define and test a new theory today, tomorrow and every new day after.
Computer evolution is dependent on novel data from humanity. Self-regulation and optimization can allow computers to develop self-guidance but it seems that human governance and refereeing is a keystone of novel computing and coding applications.
Meta-instances can learn to navigate topologies of meta-data concerning base instantiations, but presently it seems that human intuition and novel thinking may be needed to introduce (meta-) instances to engage the inductive bias topology of higher order (meta-) meta-instantiations (this is what is meant by meta-ification: the defining, creating and instantiating of meta-instantiations).
Many tasks are best performed by meta-instances, but an appreciable portion of these tasks, particularly difficult computing tasks of the future, may require higher-order meta-meta-instantiations, and it seems human cognition may be the best catalyst for computing algorithms to make the jump from lower-order meta-instances to higher-order meta-meta-instantiations; expecting the computer to optimize itself through meta-ification seems like a lot to ask. Imagine a futuristic object-oriented programming language that releases its own code and documentation updates every year with newly defined and redefined object types and methods.
Of course, this is a tough example because the performance of a programming language is a balance across ease of use, ease of compilation, and total functionality via a composition of imperative, declarative, and functional programming. Here it seems that the human design perspective provides much-needed oversight for developing future iterations of computer programming languages, computer operating systems and computer programs.
But really, this is the point: for a computer, meta implicates state, such as meta-data status. For a human, meta implicates a state of action, action best exemplified by the term meta-ification coined in this week’s volume.
A base object’s attributes and methods contain little information on how to initialize a meta-object. While a base object may be capable of reporting meta-data about what attributes and methods need further optimization, for now, it seems the meta-ification of an object is best conducted by a human.
Even if a base object is responsible for reporting information on which base attributes and methods need what optimization when, where and why, it is somewhat unreasonable to think that a base object’s attributes and methods would have valuable information on how to optimize those very same attributes and methods. After all, the how of the optimization performed by the subjected meta-object is described by the imperative programming that defines meta-object method and attribute operations.
The concept of meta-inquiry represents a profound leap in both human and computational understanding. By recognizing the potential of self-reference with meta-data, we open new pathways for sophisticated learning and optimization. While computers can mimic certain aspects of human cognition, the nuanced and deeply introspective nature of human intelligence remains unparalleled.
Meta-inquiry isn’t just about understanding data and meta-data; it’s about comprehending the process of understanding itself. This recursive examination allows us to continually refine our questions, hypotheses, and methodologies. As we explore meta-inquiry further, we are reminded of the unique capabilities that humans bring to the table—self-awareness, creativity, and the ability to synthesize complex ideas.
Meta-inquiry’s foundation for self-conscious governance is still being uncovered, and it promises to bring exciting developments. By leveraging human ingenuity, and the evolving power of AI, we can redefine old and new hypotheses and reform our understanding of the boundaries of plausibility and tractability in computing, leading the way to a climate where self-awareness drives us toward greater knowledge and innovation.