Publications

Integrated Information Theory (IIT) has been in development since the early 1990s. Below is a growing list of articles that present the theory in its mature form, as reflected in the mathematical formalism and terminology used in "IIT 4.0" (Albantakis et al. 2023) and throughout the IIT Wiki. 

The Development of IIT section presents the most important publications that led up to the current formalism (IIT 4.0).

For a more exhaustive list of IIT publications, click here.

Essential readings

Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms

Albantakis, L., Barbosa, L.S., Findlay, G., Grasso, M., Haun, A.M., Marshall, W., Mayner, W.G.P., Zaeemzadeh, A., Boly, M., Juel, B.E., Sasai, S., Fujii, K., David, I., Hendren, J., Lang, J.P., Tononi, G. (2023). PLoS Computational Biology, 19(10), e1011465.

Provides the mature version of the theory with its mathematical formalism.

Abstract: This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions concerning both the presence and the quality of experience, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate formulation of the axioms as postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system’s irreducible cause–effect power, the distinctions and relations specified by a substrate can account for the quality of experience.

Consciousness and the fallacy of misplaced objectivity 

Ellia, F., Hendren, J., Grasso, M., Kozma, C., Mindt, G., Lang, J.P., Haun, A.M., Albantakis, L., Boly, M., and Tononi, G. (2021). Neuroscience of Consciousness, 2021(2), niab032. 

Argues that a science of consciousness needs to account for the subjective properties of experience, not for its behavioral, functional, or neural correlates.

Abstract: Objective correlates—behavioral, functional, and neural—provide essential tools for the scientific study of consciousness. But reliance on these correlates should not lead to the ‘fallacy of misplaced objectivity’: the assumption that only objective properties should and can be accounted for objectively through science. Instead, what needs to be explained scientifically is what experience is intrinsically—its subjective properties—not just what we can do with it extrinsically. And it must be explained; otherwise the way experience feels would turn out to be magical rather than physical. We argue that it is possible to account for subjective properties objectively once we move beyond cognitive functions and realize what experience is and how it is structured. Drawing on integrated information theory, we show how an objective science of the subjective can account, in strictly physical terms, for both the essential properties of every experience and the specific properties that make particular experiences feel the way they do.

Integrated information theory: From consciousness to its physical substrate

Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Nature Reviews Neuroscience, 17(7), 450–461.

Reviews how IIT explains several aspects of the relationship between consciousness and the brain, offers various counterintuitive empirical predictions, and provides a path to assessing consciousness in non-communicative patients.

Abstract: In this Opinion article, we discuss how integrated information theory accounts for several aspects of the relationship between consciousness and the brain. Integrated information theory starts from the essential properties of phenomenal experience, from which it derives the requirements for the physical substrate of consciousness. It argues that the physical substrate of consciousness must be a maximum of intrinsic cause–effect power and provides a means to determine, in principle, the quality and quantity of experience. The theory leads to some counterintuitive predictions and can be used to develop new tools for assessing consciousness in non-communicative patients.

Why does space feel the way it does? Towards a principled account of spatial experience

Haun, A.M., Tononi, G. (2019). Entropy, 21(12), 1160.

Accounts for the phenomenal properties of spatial experiences through properties of the cause–effect structure unfolded from a grid-like substrate.

Abstract: There must be a reason why an experience feels the way it does. A good place to begin addressing this question is spatial experience, because it may be more penetrable by introspection than other qualities of consciousness such as color or pain. Moreover, much of experience is spatial, from that of our body to the visual world, which appears as if painted on an extended canvas in front of our eyes. Because it is ‘right there’, we usually take space for granted and overlook its qualitative properties. However, we should realize that a great number of phenomenal distinctions and relations are required for the canvas of space to feel ‘extended’. Here we argue that, to be experienced as extended, the canvas of space must be composed of countless spots, here and there, small and large, and these spots must be related to each other in a characteristic manner through connection, fusion, and inclusion. Other aspects of the structure of spatial experience follow from extendedness: every spot can be experienced as enclosing a particular region, with its particular location, size, boundary, and distance from other spots. We then propose an account of the phenomenal properties of spatial experiences based on integrated information theory (IIT). The theory provides a principled approach for characterizing both the quantity and quality of experience by unfolding the cause–effect structure of a physical substrate. Specifically, we show that a simple simulated substrate of units connected in a grid-like manner yields a cause–effect structure whose properties can account for the main properties of spatial experience. These results uphold the hypothesis that our experience of space is supported by brain areas whose units are linked by a grid-like connectivity. They also predict that changes in connectivity, even in the absence of changes in activity, should lead to a warping of experienced space. 
To the extent that this approach provides an initial account of phenomenal space, it may also serve as a starting point for investigating other aspects of the quality of experience and their physical correspondents.

Why does time feel the way it does?

Comolatti, R., Grasso, M., & Tononi, G. (2024). arXiv, 2024-12

Accounts for the phenomenal properties of temporal experiences through properties of the cause–effect structure unfolded from a substrate organized as a directed grid.

Abstract: Why does time feel flowing? Why does every moment feel directed, flowing away from us toward the past? Why do we have an experience of the present as a moment, and why does it feel composed of various smaller moments within it? In this paper, we use the formalism of IIT to account for why time feels flowing. First, we characterize the phenomenology of time, defining the explanandum we aim to account for: a phenomenal structure of distinctions and relations between them that we call phenomenal flow. Then, we propose an account of phenomenal flow in physical terms in a way that is principled and testable, linking time phenomenology to a particular physical substrate (directed 1D grids) and to plausible circuits in the brain. IIT establishes an explanatory identity between the properties of experience and the properties of the cause–effect structure specified by its substrate. Applied to the experience of time, we show how the properties of phenomenal flow—moments, the way they feel directed, the way they relate with each other, the way they compose the present, among others—can be accounted for in physical terms by the cause–effect structure specified by directed grids: by the causal distinctions that compose the cause–effect structure and the specific ways they relate to compose a flow.

Dissociating artificial intelligence from artificial consciousness

Findlay, G., Marshall, W., Albantakis, L., David, I., Mayner, W.G.P., Koch, C., Tononi, G. (2024). arXiv:2412.04571.

Challenges computational-functionalist approaches to consciousness by using the IIT formalism to demonstrate that it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience.

Abstract: Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, having the same cognitive abilities, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which---a basic stored-program computer---simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.

Only what exists can cause: An intrinsic view of free will

Tononi, G., Albantakis, L., Boly, M., Cirelli, C., Koch, C. (2022). arXiv:2206.02069.

Argues, based on IIT’s intrinsic powers ontology, that we—not our neurons or atoms—are the true causes of our willed actions, for which we bear true responsibility.

Abstract: This essay addresses the implications of integrated information theory (IIT) for free will. IIT is a theory of what consciousness is and what it takes to have it. According to IIT, the presence of consciousness is accounted for by a maximum of cause-effect power in the brain. Moreover, the way specific experiences feel is accounted for by how that cause-effect power is structured. If IIT is right, we do have free will in the fundamental sense: we have true alternatives, we make true decisions, and we - not our neurons or atoms - are the true cause of our willed actions and bear true responsibility for them. IIT's argument for true free will hinges on the proper understanding of consciousness as true existence, as captured by its intrinsic powers ontology: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause.

Mathematical formalism

The mature formalism of IIT is captured in "IIT 4.0" (Albantakis et al. 2023). Below are key papers expanding on specific aspects of the formalism.

A measure for intrinsic information

Barbosa, L. S., Marshall, W., Streipert, S., Albantakis, L., & Tononi, G. (2020). Scientific Reports, 10(1), 18803.

Introduces a unique information measure from the intrinsic perspective of a system, which satisfies IIT’s postulates of existence, intrinsicality, and information.

Abstract: We introduce an information measure that reflects the intrinsic perspective of a receiver or sender of a single symbol, who has no access to the communication channel and its source or target. The measure satisfies three desired properties—causality, specificity, intrinsicality—and is shown to be unique. Causality means that symbols must be transmitted with probability greater than chance. Specificity means that information must be transmitted by an individual symbol. Intrinsicality means that a symbol must be taken as such and cannot be decomposed into signal and noise. It follows that the intrinsic information carried by a specific symbol increases if the repertoire of symbols increases without noise (expansion) and decreases if it does so without signal (dilution). An optimal balance between expansion and dilution is relevant for systems whose elements must assess their inputs and outputs from the intrinsic perspective, such as neurons in a network.

System integrated information

Marshall, W., Grasso, M., Mayner, W.G.P., Zaeemzadeh, A., Barbosa, L.S., Chastain, E., Findlay, G., Sasai, S., Albantakis, L., Tononi, G. (2023). Entropy, 25(2), 334.

Proposes a measure of system integrated information (φs) and explores how this measure is impacted by determinism, degeneracy, and “fault lines” in connectivity.

Abstract: Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause–effect structure unfolded from a maximally irreducible substrate (a Φ-structure). In this work we introduce a definition for the integrated information of a system (𝜑𝑠) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system-integrated information. We then demonstrate how the proposed measure identifies complexes as systems, the 𝜑𝑠 of which is greater than the 𝜑𝑠 of any overlapping candidate systems.

Intrinsic units: Identifying a system's causal grain

Marshall, W., Findlay, G., Albantakis, L., & Tononi, G. (2024). arXiv, 2024-04.

Shows how to identify a system’s intrinsic units and demonstrates that the cause-effect power of a system of macro units can be higher than the cause-effect power of the corresponding micro units.

Abstract: Integrated information theory (IIT) aims to account for the quality and quantity of consciousness in physical terms. According to IIT, a substrate of consciousness must be a system of units that is a maximum of intrinsic, irreducible cause-effect power, quantified by integrated information (𝜑𝑠). Moreover, the grain of each unit must be the one—from micro (finer) to macro (coarser)—that maximizes the system’s intrinsic irreducibility (i.e., maximizes 𝜑𝑠). The units that maximize 𝜑𝑠 are called the intrinsic units of the system. This work extends the mathematical framework of IIT 4.0 to assess cause-effect power at different grains and thereby determine a system’s intrinsic units. Using simple, simulated systems, we show that the cause-effect power of a system of macro units can be higher than the cause-effect power of the corresponding micro units. Two examples highlight specific kinds of macro units, and how each kind can increase cause-effect power. The implications of the framework are discussed in the broader context of IIT, including how it provides a foundation for tests and inferences about consciousness.

Upper bounds for integrated information

Zaeemzadeh, A., & Tononi, G. (2024a). PLOS Computational Biology, 20(8), e1012323.

Presents the theoretical upper bound of the integrated information of causal distinctions (φd) and their relations (φr), and thus for structure integrated information (Φ, the sum of φd and φr). 

Abstract: Originally developed as a theory of consciousness, integrated information theory provides a mathematical framework to quantify the causal irreducibility of systems and subsets of units in the system. Specifically, mechanism integrated information quantifies how much of the causal powers of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of the mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We study mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results that show mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques to design systems that can maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit the symmetries and constraints to reduce the computations significantly and to compare different connectivity profiles in terms of their maximal achievable integrated information.

Empirical validation (explanatory & predictive power)

Below are the most important empirical IIT papers. For a more thorough list, see Part II: Empirical Validation.

Consciousness and sleep

Tononi, G., Boly, M., & Cirelli, C. (2024). Neuron, 112(10), 1568–1594.

Reviews the neurophysiological differences between dreaming and dreamless sleep, shedding light on the substrate of consciousness.

Abstract: Sleep is a universal, essential biological process. It is also an invaluable window on consciousness. It tells us that consciousness can be lost but also that it can be regained, in all its richness, when we are disconnected from the environment and unable to reflect. By considering the neurophysiological differences between dreaming and dreamless sleep, we can learn about the substrate of consciousness and understand why it vanishes. We also learn that the ongoing state of the substrate of consciousness determines the way each experience feels regardless of how it is triggered—endogenously or exogenously. Dreaming consciousness is also a window on sleep and its functions. Dreams tell us that the sleeping brain is remarkably lively, recombining intrinsic activation patterns from a vast repertoire, freed from the requirements of ongoing behavior and cognitive control.

Breakdown of cortical effective connectivity during sleep

Massimini, M., Ferrarelli, F., Huber, R., Esser, S. K., Singh, H., & Tononi, G. (2005). Science, 309(5744), 2228–2232. 

Uses a causal approach by combining transcranial magnetic stimulation and high-density EEG to show that the fading of consciousness during certain stages of sleep may be related to a breakdown in cortical effective connectivity. 

Abstract: When we fall asleep, consciousness fades yet the brain remains active. Why is this so? To investigate whether changes in cortical information transmission play a role, we used transcranial magnetic stimulation together with high-density electroencephalography and asked how the activation of one cortical area (the premotor area) is transmitted to the rest of the brain. During quiet wakefulness, an initial response (∼15 milliseconds) at the stimulation site was followed by a sequence of waves that moved to connected cortical areas several centimeters away. During non–rapid eye movement sleep, the initial response was stronger but was rapidly extinguished and did not propagate beyond the stimulation site. Thus, the fading of consciousness during certain stages of sleep may be related to a breakdown in cortical effective connectivity.

Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness

Ferrarelli, F., Massimini, M., Sarasso, S., Casali, A., Riedner, B. A., Angelini, G., ... & Pearce, R. A. (2010). Proceedings of the National Academy of Sciences, 107(6), 2681–2686.

Uses transcranial magnetic stimulation together with high-density EEG to show that the breakdown of cortical effective connectivity may underlie loss of consciousness induced by pharmacologic agents. 

Abstract: By employing transcranial magnetic stimulation (TMS) in combination with high-density electroencephalography (EEG), we recently reported that cortical effective connectivity is disrupted during early non-rapid eye movement (NREM) sleep. This is a time when subjects, if awakened, may report little or no conscious content. We hypothesized that a similar breakdown of cortical effective connectivity may underlie loss of consciousness (LOC) induced by pharmacologic agents. Here, we tested this hypothesis by comparing EEG responses to TMS during wakefulness and LOC induced by the benzodiazepine midazolam. Unlike spontaneous sleep states, a subject’s level of vigilance can be monitored repeatedly during pharmacological LOC. We found that, unlike during wakefulness, wherein TMS triggered responses in multiple cortical areas lasting for >300 ms, during midazolam-induced LOC, TMS-evoked activity was local and of shorter duration. Furthermore, a measure of the propagation of evoked cortical currents (significant current scattering, SCS) could reliably discriminate between consciousness and LOC. These results resemble those observed in early NREM sleep and suggest that a breakdown of cortical effective connectivity may be a common feature of conditions characterized by LOC. Moreover, these results suggest that it might be possible to use TMS-EEG to assess consciousness during anesthesia and in pathological conditions, such as coma, vegetative state, and minimally conscious state.

A theoretically based index of consciousness independent of sensory processing and behavior

Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). Science Translational Medicine, 5(198), 198ra105–198ra105.

Introduces and tests an index of the level of consciousness called the perturbational complexity index (PCI), calculated by perturbing the cortex with transcranial magnetic stimulation and then compressing the spatiotemporal pattern of the response to measure its algorithmic complexity.

Abstract: One challenging aspect of the clinical assessment of brain-injured, unresponsive patients is the lack of an objective measure of consciousness that is independent of the subject’s ability to interact with the external environment. Theoretical considerations suggest that consciousness depends on the brain’s ability to support complex activity patterns that are, at once, distributed among interacting cortical areas (integrated) and differentiated in space and time (information-rich). We introduce and test a theory-driven index of the level of consciousness called the perturbational complexity index (PCI). PCI is calculated by (i) perturbing the cortex with transcranial magnetic stimulation (TMS) to engage distributed interactions in the brain (integration) and (ii) compressing the spatiotemporal pattern of these electrocortical responses to measure their algorithmic complexity (information). We test PCI on a large data set of TMS-evoked potentials recorded in healthy subjects during wakefulness, dreaming, nonrapid eye movement sleep, and different levels of sedation induced by anesthetic agents (midazolam, xenon, and propofol), as well as in patients who had emerged from coma (vegetative state, minimally conscious state, and locked-in syndrome). PCI reliably discriminated the level of consciousness in single individuals during wakefulness, sleep, and anesthesia, as well as in patients who had emerged from coma and recovered a minimal level of consciousness. PCI can potentially be used for objective determination of the level of consciousness at the bedside.

Stratification of unresponsive patients by an independently validated index of brain complexity

Casarotto, S., Comanducci, A., Rosanova, M., Sarasso, S., Fecchio, M., Napolitani, M., ... & Massimini, M. (2016). Annals of Neurology, 80(5), 718–729.

Validates the Perturbational Complexity Index (PCI) as a reliable measure of consciousness in both healthy individuals and brain-injured patients, offering a method to stratify behaviorally unresponsive patients and detect possible cases of covert consciousness.

Abstract: 

Objective: Validating objective, brain-based indices of consciousness in behaviorally unresponsive patients represents a challenge due to the impossibility of obtaining independent evidence through subjective reports. Here we address this problem by first validating a promising metric of consciousness—the Perturbational Complexity Index (PCI)—in a benchmark population who could confirm the presence or absence of consciousness through subjective reports, and then applying the same index to patients with disorders of consciousness (DOCs).

Methods: The benchmark population encompassed 150 healthy controls and communicative brain-injured subjects in various states of conscious wakefulness, disconnected consciousness, and unconsciousness. Receiver operating characteristic curve analysis was performed to define an optimal cutoff for discriminating between the conscious and unconscious conditions. This cutoff was then applied to a cohort of noncommunicative DOC patients (38 in a minimally conscious state [MCS] and 43 in a vegetative state [VS]).

Results: We found an empirical cutoff that discriminated with 100% sensitivity and specificity between the conscious and the unconscious conditions in the benchmark population. This cutoff resulted in a sensitivity of 94.7% in detecting MCS and allowed the identification of a number of unresponsive VS patients (9 of 43) with high values of PCI, overlapping with the distribution of the benchmark conscious condition.

Interpretation: Given its high sensitivity and specificity in the benchmark and MCS population, PCI offers a reliable, independently validated stratification of unresponsive patients that has important physiopathological and therapeutic implications. In particular, the high-PCI subgroup of VS patients may retain a capacity for consciousness that is not expressed in behavior.

A perturbational approach for evaluating the brain's capacity for consciousness

Massimini, M., Boly, M., Casali, A., Rosanova, M., & Tononi, G. (2009). Progress in Brain Research, 177, 201–214.

Starts from IIT-derived principles to propose an empirical approach to measuring consciousness whereby direct cortical perturbations and recordings are used to gauge the co-occurrence of integration and differentiation in thalamocortical circuits.  

Abstract: How do we evaluate a brain's capacity to sustain conscious experience if the subject does not manifest purposeful behaviour and does not respond to questions and commands? What should we measure in this case? An emerging idea in theoretical neuroscience is that what really matters for consciousness in the brain is not activity levels, access to sensory inputs or neural synchronization per se, but rather the ability of different areas of the thalamocortical system to interact causally with each other to form an integrated whole. In particular, the information integration theory of consciousness (IITC) argues that consciousness is integrated information and that the brain should be able to generate consciousness to the extent that it has a large repertoire of available states (information), yet it cannot be decomposed into a collection of causally independent subsystems (integration). To evaluate the ability to integrate information among distributed cortical regions, it may not be sufficient to observe the brain in action. Instead, it is useful to employ a perturbational approach and examine to what extent different regions of the thalamocortical system can interact causally (integration) and produce specific responses (information). Thanks to a recently developed technique, transcranial magnetic stimulation and high-density electroencephalography (TMS/hd-EEG), one can record the immediate reaction of the entire thalamocortical system to controlled perturbations of different cortical areas. In this chapter, using sleep as a model of unconsciousness, we show that TMS/hd-EEG can detect clear-cut changes in the ability of the thalamocortical system to integrate information when the level of consciousness fluctuates across the sleep-wake cycle. Based on these results, we discuss the potential applications of this novel technique to evaluate objectively the brain's capacity for consciousness at the bedside of brain-injured patients.

The neural correlates of dreaming

Siclari, F., Baird, B., Perogamvros, L., Bernardi, G., LaRocque, J. J., Riedner, B., Boly, M., Postle, B., & Tononi, G. (2017). Nature Neuroscience, 20(6), 872–878.

Finds that in both NREM and REM sleep, reports of dream experience are associated with local decreases in low-frequency activity in posterior cortical regions, with high-frequency activity in these regions correlating with specific dream contents.

Abstract: Consciousness never fades during waking. However, when awakened from sleep, we sometimes recall dreams and sometimes recall no experiences. Traditionally, dreaming has been identified with rapid eye-movement (REM) sleep, characterized by wake-like, globally 'activated', high-frequency electroencephalographic activity. However, dreaming also occurs in non-REM (NREM) sleep, characterized by prominent low-frequency activity. This challenges our understanding of the neural correlates of conscious experiences in sleep. Using high-density electroencephalography, we contrasted the presence and absence of dreaming in NREM and REM sleep. In both NREM and REM sleep, reports of dream experience were associated with local decreases in low-frequency activity in posterior cortical regions. High-frequency activity in these regions correlated with specific dream contents. Monitoring this posterior 'hot zone' in real time predicted whether an individual reported dreaming or the absence of dream experiences during NREM sleep, suggesting that it may constitute a core correlate of conscious experiences in sleep.

Plasticity in the structure of visual space

Song, C., Haun, A. M., & Tononi, G. (2017). eNeuro, 4(3), ENEURO.0080-17.2017.

Tests IIT’s claim that the strength of lateral connections between neurons in the visual cortex shapes the experience of spatial relatedness between locations in the visual field.

Abstract: Visual space embodies all visual experiences, yet what determines the topographical structure of visual space remains unclear. Here we test a novel theoretical framework that proposes intrinsic lateral connections in the visual cortex as the mechanism underlying the structure of visual space. The framework suggests that the strength of lateral connections between neurons in the visual cortex shapes the experience of spatial relatedness between locations in the visual field. As such, an increase in lateral connection strength shall lead to an increase in perceived relatedness and a contraction in perceived distance. To test this framework through human psychophysics experiments, we used a Hebbian training protocol in which two-point stimuli were flashed in synchrony at separate locations in the visual field, to strengthen the lateral connections between two separate groups of neurons in the visual cortex. After training, participants experienced a contraction in perceived distance. Intriguingly, the perceptual contraction occurred not only between the two training locations that were linked directly by the changed connections, but also between the outward untrained locations that were linked indirectly through the changed connections. Moreover, the effect of training greatly decreased if the two training locations were too close together or too far apart and went beyond the extent of lateral connections. These findings suggest that a local change in the strength of lateral connections is sufficient to alter the topographical structure of visual space.

Neural correlates of pure presence

Boly, M., Smith, R., Vigueras Borrego, G., Pozuelos, J. P., Allaudin, T., Malinowski, P., & Tononi, G. (2024). bioRxiv, 2024-04. doi: 10.1101/2024.04.18.590081.

Shows that states of “pure presence” achieved by long-term meditators correlate with brain states in which cerebral cortex is highly awake (decreased delta activity) but neural activity is broadly reduced (decreased gamma activity), in line with IIT’s predictions.

Abstract: Pure presence (PP) is described in several meditative traditions as an experience of a vast, vivid luminosity devoid of perceptual objects, thoughts, and self. Integrated information theory (IIT) predicts that such vivid experiences may occur when the substrate of consciousness in the cerebral cortex is virtually silent. To assess this prediction, we recorded 256-electrode high-density electroencephalography (hdEEG) in long-term meditators of Vajrayana and Zen traditions who were able to reach PP towards the end of a retreat. Because neural activity is typically associated with increased EEG gamma power, we predicted that PP should be characterized by widespread gamma decreases. For meditators of both traditions, PP was associated with decreased broadband hdEEG power compared to within-meditation mind-wandering, most consistent in the gamma range (30–45 Hz). Source reconstruction indicated that gamma decrease was widespread but especially pronounced in posteromedial cortex. PP broadband power also decreased compared to all other control conditions, such as watching or imagining a movie, active thinking, and open-monitoring. PP delta power (1–4Hz) was also markedly decreased compared to dreamless sleep. PP with minimal perceptual contents or accompanied by a feeling of bliss showed hdEEG signatures close to PP. In contrast, gamma activity increased during phases characterized by rich perceptual contents, such as visualization or mantra recitation. Overall, these results are consistent with PP being a state of vivid consciousness during which the cerebral cortex is highly awake (decreased delta activity) but neural activity is broadly reduced (decreased gamma activity), in line with IIT’s predictions. 

Implications and further readings

Intrinsic meaning, perception, and matching

Mayner, W.G.P., Juel, B.E., & Tononi, G. (2024). arXiv, 2024-12.

Extends IIT’s framework to characterize the relationship between the meaning of an experience (i.e., its feeling) and environmental stimuli.

Abstract: Integrated information theory (IIT) argues that the substrate of consciousness is a complex of units that is maximally irreducible. The complex's subsets specify a cause–effect structure, composed of distinctions and their relations, which accounts in full for the quality of experience. The feeling of a specific experience is also its meaning, which is thus defined intrinsically, regardless of whether the experience occurs in a dream or is triggered by processes in the environment. Here we extend IIT's framework to characterize the relationship between intrinsic meaning, extrinsic stimuli, and causal processes in the environment, illustrated using a simple model of a sensory hierarchy. We show that perception should be considered as a structured interpretation, where a stimulus from the environment acts merely as a trigger and the structure is provided by the system’s intrinsic connectivity. We also show how perceptual differentiation—the diversity of structures triggered by typical sequences of stimuli—quantifies the meaningfulness of different environments to the system. In adaptive systems, this reflects the “matching” between intrinsic meanings and causal features of an environment.

Shannon information, integrated information, and the brain: message and meaning

Zaeemzadeh, A., & Tononi, G. (2024). arXiv:2412.10626. 

Contrasts information as the communication of messages (Shannon information) with information as the intrinsic meaning—that is, feeling—of an experience (integrated information). 

Abstract: Information theory, introduced by Shannon, has been extremely successful and influential as a mathematical theory of communication. Shannon’s notion of information does not consider the meaning of the messages being communicated but only their probability. Even so, computational approaches regularly appeal to “information processing” to study how meaning is encoded and decoded in natural and artificial systems. Here, we contrast Shannon information theory with integrated information theory (IIT), which was developed to account for the presence and properties of consciousness. IIT considers meaning as integrated information and characterizes it as a structure, rather than as a message or code. In principle, IIT’s axioms and postulates allow one to “unfold” a cause–effect structure from a substrate in a state—a structure that fully defines the intrinsic meaning of an experience and its contents. It follows that the communication of information as meaning requires similarity between cause–effect structures of sender and receiver.

The unfathomable richness of seeing

Haun, A. M., & Tononi, G. (2024). PsyArXiv: jmg35. doi: 10.31234/osf.io/jmg35.

Argues that “seeing” is much more than “noticing”—against the claim that visual experience is sparse and its apparent richness illusory.

Abstract: Most experts hold that visual experience is remarkably sparse and its apparent richness is illusory. Indeed, we fail to notice the vast majority of what we think we see, and seem to rely instead on a high-level summary of a visual scene. However, we argue here that seeing is much more than noticing, and is in fact unfathomably rich. We distinguish among three levels of visual phenomenology: a high-level description of a scene based on the categorization of “objects,” an intermediate level composed of “groupings” of simple visual features such as colors, and a base-level visual field composed of “spots” and their spatial relations. We illustrate that it is impossible to see the objects that underlie a high-level description without seeing the groupings that compose them, and we cannot see the groupings without seeing the visual field to which they are bound. We then argue that the way the visual field feels—its spatial extendedness—can only be accounted for by a phenomenal structure composed of innumerable distinctions and relations. It follows that most of what we see has no functional counterpart—it cannot be used, reported, or remembered. And yet we see it.

Of maps and grids

Grasso, M., Haun, A. M., & Tononi, G. (2021). Neuroscience of Consciousness, 2021(2), niab022. 

Shows how phenomenal properties of spatial experience can be accounted for by a grid-like substrate but not by a map-like one, even though the two are functionally equivalent.

Abstract: Neuroscience has made remarkable advances in accounting for how the brain performs its various functions. Consciousness, too, is usually approached in functional terms: the goal is to understand how the brain represents information, accesses that information, and acts on it. While useful for prediction, this functional, information-processing approach leaves out the subjective structure of experience: it does not account for how experience feels. Here, we consider a simple model of how a “grid-like” network meant to resemble posterior cortical areas can represent spatial information and act on it to perform a simple “fixation” function. Using standard neuroscience tools, we show how the model represents topographically the retinal position of a stimulus and triggers eye muscles to fixate or follow it. Encoding, decoding, and tuning functions of model units illustrate the working of the model in a way that fully explains what the model does. However, these functional properties have nothing to say about the fact that a human fixating a stimulus would also “see” it—experience it at a location in space. Using the tools of Integrated Information Theory, we then show how the subjective properties of experienced space—its extendedness—can be accounted for in objective, neuroscientific terms by the “cause-effect structure” specified by the grid-like cortical area. By contrast, a “map-like” network without lateral connections, meant to resemble a pretectal circuit, is functionally equivalent to the grid-like system with respect to representation, action, and fixation but cannot account for the phenomenal properties of space.

Causal reductionism and causal structures

Grasso, M., Albantakis, L., Lang, J. P., & Tononi, G. (2021). Nature Neuroscience, 24(10), 1348-1355.

Extends the IIT framework to demonstrate that causal reductionism cannot provide a complete and coherent account of ‘what caused what’ and outlines an explicit, operational approach to analyzing causal structures.

Abstract: Causal reductionism is the widespread assumption that there is no room for additional causes once we have accounted for all elementary mechanisms within a system. Due to its intuitive appeal, causal reductionism is prevalent in neuroscience: once all neurons have been caused to fire or not to fire, it seems that causally there is nothing left to be accounted for. Here, we argue that these reductionist intuitions are based on an implicit, unexamined notion of causation that conflates causation with prediction. By means of a simple model organism, we demonstrate that causal reductionism cannot provide a complete and coherent account of ‘what caused what’. To that end, we outline an explicit, operational approach to analyzing causal structures.

What caused what? A quantitative account of actual causation using dynamical causal networks

Albantakis, L., Marshall, W., Hoel, E., & Tononi, G. (2019). Entropy, 21(5), 459.

Extends the IIT framework to determine actual causation in discrete dynamical systems by identifying and quantifying the strength of all actual causes and effects linking two consecutive system states. 

Abstract: Actual causation is concerned with the question: “What caused what?” Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system’s causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the “what caused what?” question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation, that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.

Evolution of integrated causal structures in animats exposed to environments of increasing complexity

Albantakis, L., Hintze, A., Koch, C., Adami, C., & Tononi, G. (2014). PLoS Computational Biology, 10(12), e1003966.

Uses a simulation of small, adaptive logic-gate networks (“animats”) to demonstrate how evolutionary pressure would favor the development of substrates capable of high integrated information.

Abstract: Natural selection favors the evolution of brains that can capture fitness-relevant features of the environment's causal structure. We investigated the evolution of small, adaptive logic-gate networks (“animats”) in task environments where falling blocks of different sizes have to be caught or avoided in a ‘Tetris-like’ game. Solving these tasks requires the integration of sensor inputs and memory. Evolved networks were evaluated using measures of information integration, including the number of evolved concepts and the total amount of integrated conceptual information. The results show that, over the course of the animats' adaptation, i) the number of concepts grows; ii) integrated conceptual information increases; iii) this increase depends on the complexity of the environment, especially on the requirement for sequential memory. These results suggest that the need to capture the causal structure of a rich environment, given limited sensors and internal mechanisms, is an important driving force for organisms to develop highly integrated networks (“brains”) with many concepts, leading to an increase in their internal complexity.

Development of IIT

The mature formalism of IIT is captured in "IIT 4.0" (Albantakis et al. 2023). To trace the theory's development, we list below the most important IIT papers that led up to IIT 4.0.

From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0

Oizumi, M., Albantakis, L., & Tononi, G. (2014). PLoS Computational Biology, 10(5), e1003588. 

Introduces IIT 3.0.

Abstract: This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as “differences that make a difference” within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes.

Qualia: the geometry of integrated information

Balduzzi, D., & Tononi, G. (2009). PLoS Computational Biology, 5(8), e1000462.

Expands on IIT 2.0.

Abstract: According to the integrated information theory, the quantity of consciousness is the amount of integrated information generated by a complex of elements, and the quality of experience is specified by the informational relationships it generates. This paper outlines a framework for characterizing the informational relationships generated by such systems. Qualia space (Q) is a space having an axis for each possible state (activity pattern) of a complex. Within Q, each submechanism specifies a point corresponding to a repertoire of system states. Arrows between repertoires in Q define informational relationships. Together, these arrows specify a quale—a shape that completely and univocally characterizes the quality of a conscious experience. Φ—the height of this shape—is the quantity of consciousness associated with the experience. Entanglement measures how irreducible informational relationships are to their component relationships, specifying concepts and modes. Several corollaries follow from these premises. The quale is determined by both the mechanism and state of the system. Thus, two different systems having identical activity patterns may generate different qualia. Conversely, the same quale may be generated by two systems that differ in both activity and connectivity. Both active and inactive elements specify a quale, but elements that are inactivated do not. Also, the activation of an element affects experience by changing the shape of the quale. The subdivision of experience into modalities and submodalities corresponds to subshapes in Q. In principle, different aspects of experience may be classified as different shapes in Q, and the similarity between experiences reduces to similarities between shapes. Finally, specific qualities, such as the “redness” of red, while generated by a local mechanism, cannot be reduced to it, but require considering the entire quale. Ultimately, the present framework may offer a principled way for translating qualitative properties of experience into mathematics.

Integrated information in discrete dynamical systems: Motivation and theoretical framework

Balduzzi, D., & Tononi, G. (2008). PLoS Computational Biology, 4(6), e1000091.

Expands on IIT 2.0.

Abstract: This paper introduces a time- and state-dependent measure of integrated information, φ, which captures the repertoire of causal states available to a system as a whole. Specifically, φ quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) φ varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) φ varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) φ varies as a function of network architecture. High φ values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high φ because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high φ but are inefficient. (iv) In Hopfield networks, φ is low for attractor states and neutral states, but increases if the networks are optimized to achieve tension between local and global interactions. These basic examples appear to match well against neurobiological evidence concerning the neural substrates of consciousness. More generally, φ appears to be a useful metric to characterize the capacity of any physical system to integrate information.
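For deterministic systems, the state-dependent measure described above takes a particularly simple form: with a maximum-entropy (uniform) perturbation prior, the information generated by entering state x1 is n − log2|preimage(x1)| bits. The sketch below illustrates this with two hypothetical two-node toys of our own construction; it computes effective information for the whole system only, whereas φ proper would additionally require evaluating it across the minimum information partition.

```python
import itertools
import math

def effective_information(tpm, state, n):
    """Information generated when a deterministic n-node binary system
    enters `state`: the relative entropy between the a posteriori
    repertoire (uniform over the preimage of `state`) and the a priori
    maximum-entropy repertoire (uniform over all 2**n states).
    For deterministic dynamics this equals n - log2(|preimage|) bits."""
    preimage = [s for s in itertools.product((0, 1), repeat=n)
                if tpm[s] == state]
    if not preimage:
        raise ValueError("state is unreachable; ei is undefined")
    return n - math.log2(len(preimage))

states = list(itertools.product((0, 1), repeat=2))

# Toy 1: each node copies the other (an invertible map), so observing
# the current state rules out all prior states but one.
swap = {s: (s[1], s[0]) for s in states}
print(effective_information(swap, (0, 1), 2))   # → 2.0

# Toy 2: both nodes reset to 0 regardless of the past, so the current
# state rules out nothing and no information is generated.
reset = {s: (0, 0) for s in states}
print(effective_information(reset, (0, 0), 2))  # → 0.0
```

The contrast matches point (ii) of the abstract: the two systems can show similar surface activity, but the one that merely resets generates no information about its causes.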

Consciousness as integrated information: A provisional manifesto

Tononi, G. (2008). The Biological Bulletin, 215(3), 216-242. 

Introduces IIT 2.0.

Abstract: The integrated information theory (IIT) starts from phenomenology and makes use of thought experiments to claim that consciousness is integrated information. Specifically: (i) the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements; (ii) the quality of experience is specified by the set of informational relationships generated within that complex. Integrated information (Phi) is defined as the amount of information generated by a complex of elements, above and beyond the information generated by its parts. Qualia space (Q) is a space where each axis represents a possible state of the complex, each point is a probability distribution of its states, and arrows between points represent the informational relationships among its elements generated by causal mechanisms (connections). Together, the set of informational relationships within a complex constitute a shape in Q that completely and univocally specifies a particular experience. Several observations concerning the neural substrate of consciousness fall naturally into place within the IIT framework. Among them are the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the distinct role of different cortical architectures in affecting the quality of experience. Equating consciousness with integrated information carries several implications for our view of nature.

An information integration theory of consciousness

Tononi, G. (2004). BMC Neuroscience, 5, 42.

Introduces IIT 1.0.

Abstract: 

Background: Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition? 

Presentation of the hypothesis: This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation – the availability of a very large number of conscious experiences; and integration – the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the Φ value of a complex of elements. Φ is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with Φ>0 that is not part of a subset of higher Φ. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex. 

Testing the hypothesis: The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness. 

Implications of the hypothesis: The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts.

Measures of degeneracy and redundancy in biological networks

Tononi, G., Sporns, O., & Edelman, G. M. (1999). Proceedings of the National Academy of Sciences, 96(6), 3257-3262.

Presents the notion of neural degeneracy, which introduces a causal notion of information.

Abstract: 

Degeneracy, the ability of elements that are structurally different to perform the same function, is a prominent property of many biological systems ranging from genes to neural networks to evolution itself. Because structurally different elements may produce different outputs in different contexts, degeneracy should be distinguished from redundancy, which occurs when the same function is performed by identical elements. However, because of ambiguities in the distinction between structure and function and because of the lack of a theoretical treatment, these two notions often are conflated. By using information theoretical concepts, we develop here functional measures of the degeneracy and redundancy of a system with respect to a set of outputs. These measures help to distinguish the concept of degeneracy from that of redundancy and make it operationally useful. Through computer simulations of neural systems differing in connectivity, we show that degeneracy is low both for systems in which each element affects the output independently and for redundant systems in which many elements can affect the output in a similar way but do not have independent effects. By contrast, degeneracy is high for systems in which many different elements can affect the output in a similar way and at the same time can have independent effects. We demonstrate that networks that have been selected for degeneracy have high values of complexity, a measure of the average mutual information between the subsets of a system. These measures promise to be useful in characterizing and understanding the functional robustness and adaptability of biological networks.

Consciousness and complexity

Tononi, G., & Edelman, G. M. (1998). Science, 282(5395), 1846-1851.

Introduces the dynamic core hypothesis, which already incorporates some of the essential ingredients of IIT.

Abstract: 

Conventional approaches to understanding consciousness are generally concerned with the contribution of specific brain areas or groups of neurons. By contrast, it is considered here what kinds of neural processes can account for key properties of conscious experience. Applying measures of neural integration and complexity, together with an analysis of extensive neurological data, leads to a testable proposal—the dynamic core hypothesis—about the properties of the neural substrate of consciousness.

Functional clustering: Identifying strongly interactive brain regions in neuroimaging data

Tononi, G., McIntosh, A. R., Russell, D. P., & Edelman, G. M. (1998). Neuroimage, 7(2), 133-149.

Presents an attempt to identify the borders of a complex in line with the assessment of maxima of integrated information in IIT.

Abstract: 

Brain imaging data are generally used to determine which brain regions are most active in an experimental paradigm or in a group of subjects. Theoretical considerations suggest that it would also be of interest to know which set of brain regions are most interactive in a given task or group of subjects. A subset of regions that are much more strongly interactive among themselves than with the rest of the brain is called here a functional cluster. Functional clustering can be assessed by calculating for each subset of brain regions a measure, the cluster index, obtained by dividing the statistical dependence within the subset by that between the subset and rest of the brain. A cluster index value near 1 indicates a homogeneous system, while a high cluster index indicates that a subset of brain regions forms a distinct functional cluster. Within a functional cluster, individual brain regions are ranked at the center or at the periphery according to their statistical dependence with the rest of that cluster. The applicability of this approach has been tested on PET data obtained from normal and schizophrenic subjects performing a set of cognitive tasks. Analysis of the data reveals evidence of functional clustering. A comparative evaluation of which regions are more peripheral or more central suggests distinct differences between the two groups of subjects. We consider the applicability of this analysis to data obtained with imaging modalities offering higher temporal resolution than PET.
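As a rough computational illustration, the ratio at the heart of the cluster index can be evaluated directly on a known joint distribution. The four-unit binary system below is hypothetical, built only for this example, and the raw ratio shown omits the normalization against a homogeneous reference system that the paper uses to make CI ≈ 1 signal homogeneity; even so, the strongly coupled pair stands out.

```python
import itertools
import math

def entropy(p):
    """Shannon entropy (bits) of a distribution {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(p, idx):
    """Marginal distribution over the units at positions `idx`."""
    m = {}
    for state, q in p.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + q
    return m

def cluster_index(p, subset, n):
    """Raw cluster index: statistical dependence within `subset`
    (its integration) divided by its mutual information with the rest."""
    rest = tuple(i for i in range(n) if i not in subset)
    h_sub = entropy(marginal(p, subset))
    integration = sum(entropy(marginal(p, (i,))) for i in subset) - h_sub
    mi = h_sub + entropy(marginal(p, rest)) - entropy(p)
    return integration / mi if mi > 1e-12 else float("inf")

# Hypothetical system: unit 1 is a noisy copy of unit 0 (agrees 90% of
# the time), unit 2 is only weakly coupled to unit 0 (agrees 60% of the
# time), and unit 3 is independent of everything.
p = {}
for x0, x1, x2, x3 in itertools.product((0, 1), repeat=4):
    prob = 0.5                          # p(x0)
    prob *= 0.9 if x1 == x0 else 0.1    # p(x1 | x0)
    prob *= 0.6 if x2 == x0 else 0.4    # p(x2 | x0)
    prob *= 0.5                         # p(x3)
    p[(x0, x1, x2, x3)] = prob

pairs = list(itertools.combinations(range(4), 2))
scores = {s: cluster_index(p, s, 4) for s in pairs}
best = max(scores, key=scores.get)
print(best)  # → (0, 1)
```

The pair (0, 1) scores far higher than any other subset: it is strongly interactive internally but only weakly coupled to the rest, which is exactly the signature of a functional cluster.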

A complexity measure for selective matching of signals by the brain

Tononi, G., Sporns, O., & Edelman, G. M. (1996). Proceedings of the National Academy of Sciences, 93(8), 3422-3427.

Introduces matching complexity, a measure of how well a neural system’s intrinsic correlations fit the statistical structure of its sensory input—a precursor of IIT’s notion of matching.

Abstract: 

We have previously derived a theoretical measure of neural complexity (CN) in an attempt to characterize functional connectivity in the brain. CN measures the amount and heterogeneity of statistical correlations within a neural system in terms of the mutual information between subsets of its units. CN was initially used to characterize the functional connectivity of a neural system isolated from the environment. In the present paper, we introduce a related statistical measure, matching complexity (CM), which reflects the change in CN that occurs after a neural system receives signals from the environment. CM measures how well the ensemble of intrinsic correlations within a neural system fits the statistical structure of the sensory input. We show that CM is low when the intrinsic connectivity of a simulated cortical area is randomly organized. Conversely, CM is high when the intrinsic connectivity is modified so as to differentially amplify those intrinsic correlations that happen to be enhanced by sensory input. When the input is represented by an individual stimulus, a positive value of CM indicates that the limited mutual information between sensory sheets sampling the stimulus and the rest of the brain triggers a large increase in the mutual information between many functionally specialized subsets within the brain. In this way, a complex brain can deal with context and go "beyond the information given."

A measure for brain complexity: relating functional segregation and integration in the nervous system

Tononi, G., Sporns, O., & Edelman, G. M. (1994). Proceedings of the National Academy of Sciences, 91(11), 5033-5037.

Introduces a measure of neural complexity as a first attempt to capture the interplay of integration and differentiation, as well as its compositional aspects.

Abstract: 

In brains of higher vertebrates, the functional segregation of local areas that differ in their anatomy and physiology contrasts sharply with their global integration during perception and behavior. In this paper, we introduce a measure, called neural complexity (CN), that captures the interplay between these two fundamental aspects of brain organization. We express functional segregation within a neural system in terms of the relative statistical independence of small subsets of the system and functional integration in terms of significant deviations from independence of large subsets. CN is then obtained from estimates of the average deviation from statistical independence for subsets of increasing size. CN is shown to be high when functional segregation coexists with integration and to be low when the components of a system are either completely independent (segregated) or completely dependent (integrated). We apply this complexity measure in computer simulations of cortical areas to examine how some basic principles of neuroanatomical organization constrain brain dynamics. We show that the connectivity patterns of the cerebral cortex, such as a high density of connections, strong local connectivity organizing cells into neuronal groups, patchiness in the connectivity among neuronal groups, and prevalent reciprocal connections, are associated with high values of CN. The approach outlined here may prove useful in analyzing complexity in other biological domains such as gene regulation and embryogenesis.
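For a system whose joint distribution is known, C_N can be computed directly from subset entropies, using one equivalent form of the measure: C_N(X) = Σ_k [⟨H(X_k^j)⟩ − (k/n) H(X)], the average entropy of subsets of size k compared against the linear expectation (k/n)H(X). The sketch below is a toy on three binary units, not the paper's simulations of cortical areas; it recovers the limiting cases from the abstract—zero complexity for fully independent (segregated) units and a modest value for fully dependent (integrated) ones.

```python
import itertools
import math

def entropy(p):
    """Shannon entropy (bits) of a distribution {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(p, idx):
    """Marginal distribution over the units at positions `idx`."""
    m = {}
    for state, q in p.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + q
    return m

def neural_complexity(p, n):
    """C_N(X) = sum_k [ <H(X_k^j)>_j - (k/n) H(X) ], where the average
    runs over all subsets of size k (the k = n term vanishes)."""
    h_full = entropy(p)
    cn = 0.0
    for k in range(1, n + 1):
        subsets = list(itertools.combinations(range(n), k))
        avg_h = sum(entropy(marginal(p, s)) for s in subsets) / len(subsets)
        cn += avg_h - (k / n) * h_full
    return cn

# Three independent fair bits: fully segregated, no complexity.
indep = {s: 1 / 8 for s in itertools.product((0, 1), repeat=3)}
# Three perfectly correlated bits: fully integrated copies.
copies = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(neural_complexity(indep, 3))   # → 0.0
print(neural_complexity(copies, 3))  # → 1.0
```

Both extremes score low; systems whose subsets are partly independent and partly integrated score higher, which is the sense in which C_N captures the coexistence of segregation and integration.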