Agreeable Instantiations V 8: Meta-Inquiry. Define Meta? (Current Draft)

The computer, like humans, can read and write. In computer programming, and specifically in object-oriented programming languages, programs employ methods that operate on data held as attributes, assembling objects that interact with one another as the building blocks, the atomic elements, of a given program.

Objects have attributes and methods. Very often, object method operations are capable of reading and writing object attributes. As computers become more advanced, meta-ification, the act of capitalizing on logical operations over self-referential data, has emerged as a powerful force to accelerate the performance of computer algorithms. While the meta-algorithmic paradigm is an exciting one, it is unlikely that computers, and by extension algorithms, will one day read and write themselves completely independently. Humans will always be needed to read and write algorithms, but algorithms can read and write other algorithms. What follows is an introduction to meta-data, meta-inquiry, meta-hypotheses and meta-learning.
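To make the vocabulary concrete, here is a minimal Python sketch (the class and attribute names are hypothetical): an object whose method reads and writes its own attribute.

```python
class Counter:
    """A base object: one attribute, one method that reads and writes it."""

    def __init__(self):
        self.count = 0  # attribute (data)

    def increment(self):
        # method (operation) that reads and writes the object's attribute
        self.count += 1
        return self.count

c = Counter()
c.increment()
c.increment()
print(c.count)  # 2
```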

There is an important distinction between human intelligence and AI. AI systems are a collection of methods that operate on input data as attributes comprising the system’s internal state. Human intelligence is an assembly of attributes and methods (read: neurons, DNA and biology) that operate on external and internal data, rendering a symbiotic mosaic of states governed by a biological state of action. For the computer, meta-instantiation primarily indicates the presence of meta-instances housing meta-data that describe the empirical and theoretical state-space of base objects and base data. For a human, meta-instantiation indicates a state of action: creating and defining a unique meta-object/meta-instance that is capable of initializing and accessing a memory of base objects belonging to a defined category as a collection of meta-data. A computer programmer may dictate a state of action over the state(s) of a computer program, particularly through the definition, creation and instantiation of meta-instantiations.

Concerning the topic of meta-instantiation, computation identifies meta-status while humanity seeks meta-operation. This distinction suggests that while computers stabilize representations, humans redefine them.

Human intelligence allows for remarkable hierarchical recursive reflection on our environment. Not only is human cognitive recursion reflexive, but this recursion enables successive acts of hierarchical transcendence, creating new hierarchies that fully contain old ones.

By contrast, computational recursive reflection is discrete and bounded in hierarchical order, even while rendering increasingly detailed representations of data. Just as computability is often unattainable, computational recursive reflection can rarely independently provide guarantees about its own consistency, or fully account for its own inductive bias.

This is why humanity ought to rely on computers as an important extension of our meta-capacity! And reciprocally, why computers ought to learn to treat human cognition as an important extension of their own reflective limits!

In this way, it seems that in the meta-computing paradigm, meta-algorithms are like object-oriented attributes/data/state(s), and meta-optimizations are like object-oriented methods/procedures/state(s) of action capable of altering the attributed meta-algorithms, a process importantly catalyzed by humanity as the responsible practitioners of self-awareness and self-reference.

As such, let us embark on a journey of meta-inquiry to explore the universe of self-awareness and self-reference.

Returning to the title of this week’s volume seems fitting:

Meta-Inquiry. Define Meta?

Meta: Showing explicit awareness of itself as a member of its category; self-referential.

The concept of self-reference as the basis of meta-instantiation is an important one. The human capacity for self-consciousness is a powerful one, unlike many others observed in nature. Consider that the human brain is a rare example of consciousness that studies itself, true self-consciousness indeed. Thus, on the topic of the theory of meta-inquiry, it seems the human brain may be a valuable source of information.

Concretely, meta-inquiry can be defined as inquiry about inquiry. Some examples are given below:

  1. Which lines of inquiry are self-sustaining?
  2. Which lines of inquiry have a latent ordering?

As the questions above have lines of inquiry as their very subjects, it is understood that meta-inquiry requires meta-data. Any sufficient answer to these questions should have an associated collection or memory of base-data identifying which lines of inquiry are valid or invalid according to the meta-inquiry.

In this volume, I will use ‘meta’ in three closely related but distinct senses:
(1) Representational: data about data (meta-data),
(2) Operational: processes that act on base processes (meta-algorithms),
(3) Reflective: systems capable of referencing themselves as members of a category.

Computational meta-learners must crucially employ computational reflection to operate effectively. The notion of computational reflection is explored in great detail in P. Maes’ thesis (https://maxapress.com/data/article/ker/preview/pdf/S0269888900004355.pdf).

Having introduced meta-instantiation conceptually, in what follows, this volume will consider meta-instances from a computational perspective.

Meta-instantiation is a powerful concept. Meta-instantiation can refer to the actual process of allocating and saving meta-data in a computer memory, or it can be used to simply identify a novel meta-object which governs base-objects in order to introduce a certain level of self-referential functionality.
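As a sketch of the second reading, assuming hypothetical class names, a meta-object can govern base objects simply by keeping a memory of their instantiations as meta-data:

```python
class MetaRegistry:
    """Meta-object: houses meta-data describing base instantiations."""

    def __init__(self):
        self.instances = []  # memory of base objects, stored as meta-data

    def register(self, obj):
        # record meta-data (here, just the class name) about a base object
        self.instances.append(type(obj).__name__)

class Base:
    registry = MetaRegistry()  # the governing meta-instance

    def __init__(self):
        Base.registry.register(self)  # each base instantiation is recorded

a, b = Base(), Base()
print(len(Base.registry.instances))  # 2 base instantiations recorded
```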

If I name a meta-instance, what is the definition of this meta-instance as implicated by the definition of the base instance?

Meta-instantiation is a compelling exploration of meaning. Any meta-instantiation should house and communicate meta-data. This self-referential recording of information provides much-needed support for inquiry into meaning. Self-reference introduces self-consciousness to functionality. Whether base subject identities are completely defined or not, meta-instantiations of subject implicate a certain kind of self-awareness. In computing, this awareness may be limited to meta-instances’ awareness of base instantiations.

More explicitly, many meta-instances have meta-meta-instantiations; however, while a human may be aware of meta-instances and higher-order meta-meta-instantiations, meta-instances instantiated in the computer’s memory are only functionally aware of lower-order base instantiations.

Meta-data is a key element of meta-instantiation. Meta-data informs us of a base data source but contains no data from the base data. It is data about data. Ex: The titles (and intended meaning thereof) of columns and rows. The number of entries (total). The number of entries (by distinct grouping(s)).

For instance, a dataset is base data; its schema—column names, types, and relationships—is meta-data. A system that modifies schemas based on usage patterns operates at the meta-level.
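A minimal sketch of this split, with hypothetical column names and entries:

```python
# Base data: the entries themselves.
base_data = [
    {"word": "meta", "definition": "self-referential"},
    {"word": "datum", "definition": "a single piece of information"},
]

# Meta-data: data about the base data, containing none of its entries.
meta_data = {
    "columns": ["word", "definition"],              # titles of columns
    "n_rows": len(base_data),                       # number of entries (total)
    "types": {"word": "str", "definition": "str"},  # declared types
}

print(meta_data["n_rows"])  # 2
```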

If the base data were presented without meta-data, it would fail to communicate anything about itself.

For example, if I gave you two copies of the same base data in different column or row orderings, could we determine the difference between the two orderings, aside from the fact that one is a permutation of the other, without the meta-data? Not likely. Even where it is possible, as in the case of a dictionary where one column lists words and another lists their definitions, it would still take far longer to process what the base data communicates without the meta-data, depending on the size of the dataset.
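The point can be sketched with toy data (entries and schema names are hypothetical): without column labels, only the permutation itself is visible; with them, each ordering reads unambiguously.

```python
# Same base data in two column orderings, with no labels attached.
rows_a = [("dog", "a domesticated canine"), ("cat", "a small feline")]
rows_b = [(definition, word) for (word, definition) in rows_a]

# Meta-data (column titles) recovers the intended reading of each copy.
schema_a = ("word", "definition")
schema_b = ("definition", "word")

# With the schemas, both copies decode to the same word -> definition map.
decoded_a = dict(rows_a)
decoded_b = {word: definition for (definition, word) in rows_b}
print(decoded_a == decoded_b)  # True
```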

The data becomes self-conscious insofar as meta-data, when paired with base data, enables a dataset to communicate about itself.

In the functionally self-referential way that meta-data communicates about itself, meta-inquiry questions itself.

Novel information can always enter from externally joined parties. For example, our base-data is joined with new data yielding an updated meta-data rundown that describes our new base-data source and changes to the base-dataset, or our most recent inquiry is answered yielding another question whose answer will require future results that build on current knowledge.

Thus, meta-data and meta-inquiry are not solely self informed, but rely on correctly instantiated memory of subjected base-data for communication about themselves.

Understanding the inclination to explore meta-data, meta-inquiry and meta-learning, it may be wise to investigate a theory of meta-hypotheses which provide foundational assumptions and principles to guide the design of meta-instantiations capable of meta-learning.

With this in mind, I would like to pose a question: How are questions and hypotheses linked and what does this say about meta-hypotheses under the shadow of meta-learning?

I believe that at the heart of every hypothesis studied under the scientific method lies a question: Is the (null) hypothesis true?

Or more precisely, from the Frequentist perspective, how likely is the data given the hypothesis? And conversely, from the Bayesian perspective, how likely is the hypothesis given the data?

Both perspectives interrogate hypotheses of subject, but anyone claiming to know the totality of truth in any given hypothesis will be limited by the availability of data. These drawbacks are hinted at by Gödel’s incompleteness theorems, which concern the finitude of facts producible by a logical system and the fact that a proof of that system’s consistency must reside outside the very same logical system.

This limitation is perhaps a corollary of the existence of a comprehensive definition of Meta in the dictionary today, without a comprehensive understanding of the implications of tomorrow’s diction. Definitions that appear complete today are inevitably provisional—bounded by present knowledge and liable to be redefined as new forms of knowledge and inquiry emerge.

Today’s Meta-Inquiry is not guaranteed to find us tomorrow conveniently. When testing meta-hypotheses and the undercurrent of meta-inquiries that prompt them, we are reminded to order and re-order any and all credentials of consistency we can furnish as a third party to the hypothesis (model/theory), and the empirical data.

Meta-hypotheses are understood as hypotheses about hypotheses. Importantly, these meta-hypotheses should, by definition, incorporate meta-data about subjected base-hypotheses in order to begin inquiry into valuation of subjected base-hypotheses as valid or not. 

I believe meta-hypotheses are an important guide to meta-learning, aiding computers in learning to learn. Meta-data about subjected base policies and their inductive bias form the basis of meta-hypotheses which seek to quantify meta-learner confidence in base-policies. Successful meta-hypotheses catalog and systematize subjected base policies and navigate the inductive bias topology of this catalog to yield high confidence and high performance policies. 

It is worth mentioning that a great example of meta-inquiry is the design of meta-objects in object-oriented programming. As objects belonging to a given class are instantiated in computer memory, meta-objects operate on base objects with knowledge of how to iteratively alter the runtime behavior of base object procedures. Meta-object instantiation initializes information on subjected base object methods and attributes; the meta-object class, by definition, owns methods to alter subjected base object methods/procedures as well as base object data/attributes. Base object instantiations initialize data on base object attributes, with the base object class, by definition, owning methods to alter only that same instantiation’s attributes. Thus, meta-object operations on base objects require logical reasoning about meta-data, implicating base object operation definitions, in order to purposefully alter base object methods and attributes in a way that base objects alone cannot, lacking the logical capacity to access, communicate and reason about meta-data concerning themselves.
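A minimal Python sketch of this dynamic, with hypothetical class and method names, in which a meta-object both reports meta-data about a base object and rewrites one of its procedures at runtime:

```python
class BaseWorker:
    def step(self, x):
        return x + 1  # base behavior, alterable only from the meta-level

class MetaObject:
    def describe(self, obj):
        # meta-data: the base object's public methods and attributes
        return [name for name in dir(obj) if not name.startswith("_")]

    def rewrite(self, cls, name, new_fn):
        # alter a subjected base object procedure at runtime
        setattr(cls, name, new_fn)

meta = MetaObject()
worker = BaseWorker()
print(meta.describe(worker))  # ['step']
print(worker.step(1))         # 2 (original behavior)

meta.rewrite(BaseWorker, "step", lambda self, x: x * 2)
print(worker.step(2))         # 4 (rewritten behavior)
```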

A sufficiently interested and performance oriented computer programmer may ask:

Which meta-objects are self-sufficient enough to exist in running code without human intervention?

I believe this question is strongly linked to the meta-inquiry: Which lines of inquiry are self-sustaining enough to complete themselves?

Importantly, computerized meta-instantiation can require effective stopping conditions to prevent these programs from becoming runaway trains; this is why the concession about lines of inquiry that complete themselves is notable.

It is seemingly possible that self-sustaining lines of inquiry about base-object data/attributes should yield valuable hypotheses concerning the design of self-sufficient meta-object instances capable of altering base-object methods and attributes without human oversight.

Understanding that meta-learning procedures have associated meta-hypotheses, supported by foundational assumptions that contribute to the inductive bias topology cataloged by those procedures, can be valuable for realizing effective stopping conditions for meta-learning procedures that hope to execute computing tasks without human intervention. Contrarily, understanding the assumptions that support the meta-hypotheses of meta-learners is not necessarily enough to guarantee that stopping-condition decision criteria lead to highly optimized policies. As such, meta-learner training is often executed as a few-shot learning problem: machine learning practitioners allow a meta-optimization procedure to run for a few iterations, then pause to observe the effects of the meta-optimization and consider the quality of the meta-learner’s optimizations in the context of the subjected base policies. In this way, researchers can intermittently certify that the foundational meta-hypothesis assumptions supporting the meta-learner are resulting in highly optimized learned policies, or test whether different initializations of meta-learner and base-policy attributes and methods lead to notably different optimized policies.

Example! In meta-learning, we can think of a meta-learner in reinforcement learning as simply being responsible for re-instantiating a base policy class object. A meta-learner is not necessarily always responsible for modifying the base object methods, but meta-learner methods can re-instantiate the policy class base object to alter the base object attributes until the base object attribute initialization that yields the best policy performance for the base object is found. The policy class base object itself is blind to its own policy performance, but this performance is monitored by the meta-object to catalog a historical memory of base object instantiations in order to initialize optimized future instantiations of the subjected base object. 
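A toy sketch of this arrangement, assuming a hypothetical Policy class and a hypothetical evaluate function standing in for measured policy performance (neither names a real RL library):

```python
class Policy:
    def __init__(self, temperature):
        # base object attribute; the policy is blind to its own performance
        self.temperature = temperature

def evaluate(policy):
    # hypothetical performance measure, peaking at temperature == 0.5
    return -abs(policy.temperature - 0.5)

class MetaLearner:
    def __init__(self):
        self.history = []  # meta-data: memory of base instantiations and scores

    def search(self, candidates):
        for t in candidates:
            policy = Policy(t)        # re-instantiate the base object
            score = evaluate(policy)  # performance monitored at the meta-level
            self.history.append((t, score))
        best_t, _ = max(self.history, key=lambda pair: pair[1])
        return Policy(best_t)         # optimized future instantiation

meta = MetaLearner()
best = meta.search([0.1, 0.5, 0.9])
print(best.temperature)  # 0.5
```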

One of the best features of computers is the functional utility of their reading and writing facilities. Computers are an amazing extension of humanity. Computers perform large calculations much more efficiently than humans because they can read information (i.e. process rules and input data) much faster than humans can. It is this same reading faculty that enables computers to write text, for example, to save new data in the computer’s memory.

As both humans and computers can read and write, an important distinction to recognize is the creative writing capacity of humans. I conjecture that human self-awareness gives us an ability to define both ourselves and our novel ideas in a way that makes us unique, collectively and individually. Yes, computers can perform some tasks much more efficiently than humans, but ultimately, computers are limited by their inability to define themselves and novel ideas.

A computer will always need to read yesterday’s dictionary to answer the question “What is the meaning of this?”. A human can develop, define and test a new theory today, tomorrow and every new day after.

Computer evolution is dependent on novel data from humanity. Self-regulation and optimization can allow computers to develop self-guidance but it seems that human governance and refereeing is a keystone of novel computing and coding applications.

Meta-instances can learn to navigate topologies of meta-data concerning base instantiations, but presently it seems that human intuition and novel thinking may be needed to introduce (meta-) instances to engage the inductive bias topology of higher order (meta-) meta-instantiations (this is what is meant by meta-ification: the defining, creating and instantiating of meta-instantiations).

Many tasks are best performed by meta-instances, but an appreciable portion of these tasks, particularly difficult computing tasks of the future, may require higher-order meta-meta-instantiations, and it seems human cognition may be the best catalyst for computing algorithms to make the jump from lower-order meta-instances to higher-order meta-meta-instantiations; expecting the computer to optimize itself through meta-ification seems like a lot to ask. Imagine a futuristic object-oriented programming language that releases its own code and documentation updates every year with newly defined and redefined object types and methods.

Of course, this is a tough example because the performance of programming languages is a balance across ease of use, ease of compilation, and total functionality via a composition of imperative, declarative, and functional programming. Here it seems that the human design perspective provides much-needed oversight for developing future iterations of computer programming languages, computer operating systems and computer programs.

But really, this is the point, for a computer, meta implicates state, such as meta-data status. For a human, meta implicates a state of action—action best exemplified by the term meta-ification coined in this week’s volume.

A base object’s attributes and methods contain little information on how to initialize a meta-object. While a base object may be capable of reporting meta-data about what attributes and methods need further optimization, for now, it seems the meta-ification of an object is best conducted by a human.

Even if a base object is responsible for reporting information on which base attributes and methods need what optimization when, where and why, it is somewhat unreasonable to think that a base object’s attributes and methods would have valuable information on how to optimize those very same attributes and methods. After all, the how of the optimization performed by the subjected meta-object is described by the imperative programming that defines meta-object method and attribute operations.

This raises an important meta-inquiry of meta-computation in its own right—one that dares to require total resolution at the transcendental level it problematizes—not just what we know, but how we know.

Knowing How. How is it that we know how? 

Measuring How (To What Degree). How is it that we determine the degree of “how”, the level of detail or completeness of a description? How is it that we know how detailed something is? That’s typically simple enough: the integers provide enough cardinality to index any finite description. But while finite structures admit enumeration, measurement alone does not constitute understanding.

Reflecting on How. How is it that we know how we think? How is it that we render our own cognition as an object subject to contemplation, such that we claim to know how we think?

Guaranteeing How. How is it that we know how to implement something so that we can guarantee we always know how right it is? And what would such a guarantee ultimately depend on?

This inquiry into the existentiality of how —how “how” exists—is a space whose boundary is best outlined and described by a dependence on datum which exist as a representation of object, requiring interpretation to define meaning.

Generating How. How is it that we know how to implement new logic? How is it that we know how to implement every possible logical instantiation of a chosen process? Or does the space of “how” exceed the structures that attempt to formalize it in a strict logical and computational sense?

Grounding How. Human cognition does study itself, and human cognition does create models of thought but can we really determine when these theories of thought, human or otherwise, are correct? Formal logical axiomatized systems have proofs of consistency, but these proofs are not housed entirely within the systems they justify. Does human cognition encounter a similar boundary?

The human capacity to find understanding is one to be admired. If we always knew how what was, is and will be were, are and are coming to be, we could lead a life without error. But despite how worrisome, frustrating and dangerous error can be, I believe error is a philosophical and scientific subjective necessity. We know that how we perceive reality is mediated through representation, and we know that the medium of representation is susceptible to distortion. Ergo, on the subject of how, it seems we fail to know the true complete object.

This limitation becomes especially apparent when we attempt to reason about phenomena through their limiting cases—at boundaries, at singular points, or at instants—where the structures that support meaning begin to lose traction, for better or worse.

The infinitesimal limits of converging sequences in the study of Calculus provide the foundation for calculating derivatives, the rate of change of a function at a single point, given by the slope of the tangent line to the function at that very point. Calculus constitutes a modern resolution of Zeno’s paradox, by defining instantaneous velocity through limits.
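Formally, the derivative in question is the familiar limit, the slope of the tangent line at the point $a$:

```latex
f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}
```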

Still, the consideration of singular points or instants of time is difficult. A boundary at such a point/instant identifies an interesting perspective: the boundary separates “before” and “after”, but is not clearly part of either. Similarly, the perspective of an object’s interaction with such a boundary is difficult: is the object at the boundary? Passing through it? Neither? Or both? Surely there is an answer, but the closer the boundary gets, the more cognition and computation can only provide partial answers.

In the previous volume, we examined the difficulty of reasoning about motion at instants of time. That analysis revealed a broader tension between continuous phenomena and discrete representations. This volume extends that tension into the domain of meta-inquiry. If the previous volume showed that motion cannot so easily be reduced to instants, this volume lends to the notion that understanding cannot be reduced to any single level of description.

Error is how we can guarantee we will never totally know how we know, and this is a fact of existence just as empowering as all other knowable truth(s) in the universe.

Consider the uncertainty principle in physics, where conjugate quantities (such as position and momentum) obey a lower bound on their joint precision. The existentiality of error reveals that we can never know exactly how fast we are moving if we know exactly how close we are to a fixed point. And conversely, we can never know exactly how close we are to a fixed point if we know exactly how fast we are moving.
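In its standard form, for position uncertainty $\Delta x$ and momentum uncertainty $\Delta p$, the bound reads:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```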

Error is a fact that is well accompanied by meta-facts.

In computational systems, error is not merely tolerated but actively structured. Error correction mechanisms—from try-except control flow in programming, which anticipates and handles failure at runtime, to error-correcting codes, which encode redundancy to recover from corrupted information—demonstrate that error is not simply an absence of correctness, but a condition that can be detected, managed, and, to some extent, reversed.
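A minimal illustration of the first mechanism, try-except control flow (the function name is hypothetical): the error is anticipated, detected and managed at runtime rather than treated as a mere absence of correctness.

```python
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # the failure is structured: detected, handled, and given a value
        return None

print(safe_divide(1, 2))  # 0.5
print(safe_divide(1, 0))  # None
```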

Yet even these systems rely on assumptions about the structure of error itself, suggesting that computational correction is always local—bounded by the structure of the very uncertainties it seeks to resolve.

Cognitive error correction does not amount to ultimate universal truth. If fact is meta-physically independent of cognition, we cannot generally know how right or wrong we are, much less how we are right or wrong about how right we are. Any notion of cognitive correctness is a non-meta-physical arbitrary approximation of degrees.

If we always knew how—fully and without remainder—error would seem to vanish. Yet error persists. Indeed, it may be necessary. How is it that we never know how much uncertainty is present? How important is it that we never know how much uncertainty is present?

Meta-computation, then, cannot fully account for how without presupposing a data-dependent, locally optimized description of its capacity to do so. This is a feature of logical axiomatized systems captured by Gödel’s incompleteness theorems—no finite system can generate the totality of all facts of existence concerning either the universe of the self or the universe of the other.

Human DNA and biological systems bear a striking resemblance to computer operating systems. Yet, at present, it is humanity that defines the very theories of thought—no other cognitive entity can yet fully ground an account of its own cognition.

Meta refers to the capacity of a system to represent, evaluate and reconfigure another system—or itself—by operating at a higher level of abstraction that can subsume the lower.

Even with computers’ computing capacity superior to that of humans, human beings become such through themselves and all other beings in ways that no other being does.

The burden of creation belongs to the creator of the universe. The burden of cognition belongs to humanity. The burden of computation belongs to computers. And yet, no burden is absolute: cognition exceeds humanity alone, just as computation exceeds the machines alone that perform it. In the same way that humanity cannot be solely responsible for cognitive ability, computers cannot be solely responsible for computational ability.

Meta-inquiry does not resolve uncertainty—it organizes it. It does not eliminate error—it renders it legible. And in doing so, it defines the boundary between systems that merely compute and systems that understand.

The concept of meta-inquiry represents a profound leap in both human and computational understanding. By recognizing the potential of self-reference with meta-data, we open new pathways for sophisticated learning and optimization. While computers can mimic certain aspects of human cognition, the nuanced and deeply introspective nature of human intelligence remains unparalleled.

Meta-inquiry isn’t just about understanding data and meta-data; it’s about comprehending the process of understanding itself. This recursive examination allows us to continually refine our questions, hypotheses, and methodologies. As we explore meta-inquiry further, we are reminded of the unique capabilities that humans bring to the table—self-awareness, creativity, and the ability to synthesize complex ideas.

Meta-inquiry’s foundation for self-conscious governance is still being uncovered, and it promises to bring exciting developments. By leveraging human ingenuity, and the evolving power of AI, we can redefine old and new hypotheses and reform our understanding of the boundaries of plausibility and tractability in computing, leading the way to a climate where self-awareness drives us toward greater knowledge and innovation.
