IIT Wiki

Integrated Information Theory:
Consciousness and Intrinsic Ontology

Welcome to the IIT Wiki

Integrated information theory (IIT) aims to provide a scientific account of consciousness and its place in nature. This wiki is designed to support you in learning about the IIT methodology, its mathematical formalism, and the intrinsic ontology that follows from the theory. 

If you are new to IIT, we suggest you start here and follow the sequence in the menu. The wiki pages are interlinked with one another, with a glossary, and with academic papers (below).

Like the theory itself, this wiki is a work in progress, and we invite you to help us improve and expand it. At the bottom of each page, please ask questions or upvote existing ones.

IIT has been largely developed at UW–Madison's Center for Sleep & Consciousness, which is also responsible for the content of this wiki.



Essential IIT Papers

Main articles referenced throughout the Wiki


Below is a growing list of articles that present IIT in its mature form. They are largely in line with the current mathematical formalism and terminology presented in "IIT 4.0" (Albantakis et al. 2023) and used throughout the IIT Wiki. A complete list of IIT papers (also reflecting earlier iterations of the theory) can be found here.

Theory (IIT 4.0)

Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms

Albantakis, L., Barbosa, L.S., Findlay, G., Grasso, M., Haun, A.M., Marshall, W., Mayner, W.G.P., Zaeemzadeh, A., Boly, M., Juel, B.E., Sasai, S., Fujii, K., David, I., Hendren, J., Lang, J.P., Tononi, G. (2023). PLoS Computational Biology, 19(10), e1011465.

Provides the mature version of the theory with its mathematical formalism.

Abstract: This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions concerning both the presence and the quality of experience, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate formulation of the axioms as postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system’s irreducible cause–effect power, the distinctions and relations specified by a substrate can account for the quality of experience.

Complete paper
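
For readers who want a hands-on feel for what "unfolding a system's irreducible cause–effect power" involves, the calculations can be tried on toy networks with PyPhi, an open-source Python package from the same group. The sketch below is only illustrative: the network, connectivity, and state are arbitrary placeholders, and publicly released PyPhi versions implement the earlier IIT 3.0 formalism, so the numbers it returns differ in detail from IIT 4.0.

    import numpy as np
    import pyphi

    # Toy 3-node Boolean network (placeholder values, not from the paper):
    # the state-by-node transition probability matrix gives, for each of the
    # 2^3 current states, the probability that each node is ON at the next step.
    tpm = np.array([
        [0, 0, 0],
        [0, 0, 1],
        [1, 0, 1],
        [1, 0, 0],
        [1, 1, 0],
        [1, 1, 1],
        [1, 1, 1],
        [1, 1, 0],
    ])
    cm = np.array([            # connectivity matrix (which units connect to which)
        [0, 0, 1],
        [1, 0, 1],
        [1, 1, 0],
    ])
    network = pyphi.Network(tpm, cm=cm, node_labels=("A", "B", "C"))
    state = (1, 0, 0)          # current state of the three units

    # Integrated information of the whole candidate system in this state.
    subsystem = pyphi.Subsystem(network, state, (0, 1, 2))
    print(pyphi.compute.phi(subsystem))

    # Search over candidate systems for the maximally irreducible one (the complex).
    print(pyphi.compute.major_complex(network, state))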

A measure for intrinsic information

Barbosa, L. S., Marshall, W., Streipert, S., Albantakis, L., & Tononi, G. (2020). Scientific Reports, 10(1), 18803.

Introduces a unique information measure from the intrinsic perspective of a system, which satisfies IIT’s postulates of existence, intrinsicality, and information.

Abstract: We introduce an information measure that reflects the intrinsic perspective of a receiver or sender of a single symbol, who has no access to the communication channel and its source or target. The measure satisfies three desired properties—causality, specificity, intrinsicality—and is shown to be unique. Causality means that symbols must be transmitted with probability greater than chance. Specificity means that information must be transmitted by an individual symbol. Intrinsicality means that a symbol must be taken as such and cannot be decomposed into signal and noise. It follows that the intrinsic information carried by a specific symbol increases if the repertoire of symbols increases without noise (expansion) and decreases if it does so without signal (dilution). An optimal balance between expansion and dilution is relevant for systems whose elements must assess their inputs and outputs from the intrinsic perspective, such as neurons in a network.

Complete paper
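
The expansion/dilution trade-off described above can be illustrated numerically. The toy sketch below is a reader's reconstruction, not code from the paper: it assumes the measure takes the selectivity-times-informativeness form used in IIT 4.0, ii(y) = p(y|x) · log2[p(y|x)/p(y)], evaluated for the specific symbol y that is received, and the channel setups are invented for illustration.

    import numpy as np

    def ii(p_y_given_x, p_y):
        # Assumed form: selectivity p(y|x) weighted by informativeness
        # log2(p(y|x)/p(y)), for the specific symbol y actually received.
        return p_y_given_x * np.log2(p_y_given_x / p_y)

    # Expansion: the repertoire grows from 2 to 2k symbols with no added noise.
    # The received symbol is still fully determined by the input (p(y|x) = 1),
    # while its unconstrained probability shrinks to 1/(2k), so ii = log2(2k).
    for k in (1, 2, 4, 8):
        print("expansion, repertoire", 2 * k, "ii =", ii(1.0, 1.0 / (2 * k)))

    # Dilution: the repertoire also grows to 2k symbols, but the extra states
    # carry no signal: given the input, the output is uniform over k of the
    # 2k symbols. Selectivity drops to 1/k while the log ratio stays at 1 bit,
    # so ii = 1/k and shrinks as the repertoire is diluted.
    for k in (1, 2, 4, 8):
        print("dilution, repertoire", 2 * k, "ii =", ii(1.0 / k, 1.0 / (2 * k)))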

System integrated information

Marshall, W., Grasso, M., Mayner, W.G.P., Zaeemzadeh, A., Barbosa, L.S., Chastain, E., Findlay, G., Sasai, S., Albantakis, L., Tononi, G. (2023). Entropy, 25(2), 334.

Proposes a measure of system integrated information (φs) and explores how this measure is impacted by determinism, degeneracy, and “fault lines” in connectivity.

Abstract: Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause–effect structure unfolded from a maximally irreducible substrate (a Φ-structure). In this work we introduce a definition for the integrated information of a system (𝜑𝑠) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system-integrated information. We then demonstrate how the proposed measure identifies complexes as systems, the 𝜑𝑠 of which is greater than the 𝜑𝑠 of any overlapping candidate systems.

Complete paper

Intrinsic units: Identifying a system's causal grain

Marshall, W., Findlay, G., Albantakis, L., & Tononi, G. (2024). arXiv, 2024-04.

Shows how to identify a system’s intrinsic units and demonstrates that the cause-effect power of a system of macro units can be higher than the cause-effect power of the corresponding micro units.

Abstract: Integrated information theory (IIT) aims to account for the quality and quantity of consciousness in physical terms. According to IIT, a substrate of consciousness must be a system of units that is a maximum of intrinsic, irreducible cause-effect power, quantified by integrated information (𝜑𝑠). Moreover, the grain of each unit must be the one—from micro (finer) to macro (coarser)—that maximizes the system’s intrinsic irreducibility (i.e., maximizes 𝜑𝑠). The units that maximize 𝜑𝑠 are called the intrinsic units of the system. This work extends the mathematical framework of IIT 4.0 to assess cause-effect power at different grains and thereby determine a system’s intrinsic units. Using simple, simulated systems, we show that the cause-effect power of a system of macro units can be higher than the cause-effect power of the corresponding micro units. Two examples highlight specific kinds of macro units, and how each kind can increase cause-effect power. The implications of the framework are discussed in the broader context of IIT, including how it provides a foundation for tests and inferences about consciousness.

Complete paper

Upper bounds for integrated information

Zaeemzadeh, A., & Tononi, G. (2024). PLOS Computational Biology, 20(8), e1012323.

Presents theoretical upper bounds for the integrated information of causal distinctions (φd) and their relations (φr), and thus for structure integrated information (Φ, the sum of φd and φr).

Abstract: Originally developed as a theory of consciousness, integrated information theory provides a mathematical framework to quantify the causal irreducibility of systems and subsets of units in the system. Specifically, mechanism integrated information quantifies how much of the causal powers of a subset of units in a state, also referred to as a mechanism, cannot be accounted for by its parts. If the causal powers of the mechanism can be fully explained by its parts, it is reducible and its integrated information is zero. Here, we study the upper bound of this measure and how it is achieved. We study mechanisms in isolation, groups of mechanisms, and groups of causal relations among mechanisms. We put forward new theoretical results that show mechanisms that share parts with each other cannot all achieve their maximum. We also introduce techniques to design systems that can maximize the integrated information of a subset of their mechanisms or relations. Our results can potentially be used to exploit the symmetries and constraints to reduce the computations significantly and to compare different connectivity profiles in terms of their maximal achievable integrated information.

Complete paper

Why does space feel the way it does? Towards a principled account of spatial experience

Haun, A.M., Tononi, G. (2019). Entropy, 21(12), 1160.

Accounts for the phenomenal properties of spatial experiences through properties of the cause–effect structure unfolded from a grid-like substrate.

Abstract: There must be a reason why an experience feels the way it does. A good place to begin addressing this question is spatial experience, because it may be more penetrable by introspection than other qualities of consciousness such as color or pain. Moreover, much of experience is spatial, from that of our body to the visual world, which appears as if painted on an extended canvas in front of our eyes. Because it is ‘right there’, we usually take space for granted and overlook its qualitative properties. However, we should realize that a great number of phenomenal distinctions and relations are required for the canvas of space to feel ‘extended’. Here we argue that, to be experienced as extended, the canvas of space must be composed of countless spots, here and there, small and large, and these spots must be related to each other in a characteristic manner through connection, fusion, and inclusion. Other aspects of the structure of spatial experience follow from extendedness: every spot can be experienced as enclosing a particular region, with its particular location, size, boundary, and distance from other spots. We then propose an account of the phenomenal properties of spatial experiences based on integrated information theory (IIT). The theory provides a principled approach for characterizing both the quantity and quality of experience by unfolding the cause–effect structure of a physical substrate. Specifically, we show that a simple simulated substrate of units connected in a grid-like manner yields a cause–effect structure whose properties can account for the main properties of spatial experience. These results uphold the hypothesis that our experience of space is supported by brain areas whose units are linked by a grid-like connectivity. They also predict that changes in connectivity, even in the absence of changes in activity, should lead to a warping of experienced space. To the extent that this approach provides an initial account of phenomenal space, it may also serve as a starting point for investigating other aspects of the quality of experience and their physical correspondents.

Complete paper

Why does time feel the way it does?

Comolatti, R., Grasso, M., & Tononi, G. (Forthcoming). 

Accounts for the phenomenal properties of temporal experiences through properties of the cause–effect structure unfolded from a substrate organized as a directed grid.

Abstract: Why does time feel flowing? Why does every moment feel directed, flowing away from us toward the past? Why do we have an experience of the present as a moment, and why does it feel composed of various smaller moments within it? In this paper, we use the formalism of IIT to account for why time feels flowing. First, we characterize the phenomenology of time, defining the explanandum we aim to account for: a phenomenal structure of distinctions and relations between them that we call phenomenal flow. Then, we propose an account of phenomenal flow in physical terms in a way that is principled and testable, linking time phenomenology to a particular physical substrate (directed 1D grids) and to plausible circuits in the brain. IIT establishes an explanatory identity between the properties of experience and the properties of the cause–effect structure specified by its substrate. Applied to the experience of time, we show how the properties of phenomenal flow—moments, the way they feel directed, the way they relate with each other, the way they compose the present, among others—can be accounted for in physical terms by the cause–effect structure specified by directed grids: by the causal distinctions that compose the cause–effect structure and the specific ways they relate to compose a flow.

Intrinsic meaning, perception, and matching

Mayner, W.G.P., Juel, B.E., Tononi, G. (Forthcoming).

Extends IIT's framework to characterize the relationship between the meaning of an experience (i.e., its feeling) and environmental stimuli.

Abstract: Integrated information theory (IIT) argues that the substrate of consciousness is a complex of units that is maximally irreducible. The complex's subsets specify a cause–effect structure, composed of distinctions and their relations, which accounts in full for the quality of experience. The feeling of a specific experience is also its meaning, which is thus defined intrinsically, regardless of whether the experience occurs in a dream or is triggered by processes in the environment. Here we extend IIT's framework to characterize the relationship between intrinsic meaning, extrinsic stimuli, and causal processes in the environment, illustrated using a simple model of a sensory hierarchy. We show that perception should be considered as a structured interpretation, where a stimulus from the environment acts merely as a trigger and the structure is provided by the system's intrinsic connectivity. We also show how perceptual differentiation—the diversity of structures triggered by typical sequences of stimuli—quantifies the meaningfulness of different environments to the system. In adaptive systems, this reflects the “matching” between intrinsic meanings and causal features of an environment.

Empirical Validation of IIT

Integrated information theory: from consciousness to its physical substrate

Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Nature Reviews Neuroscience, 17(7), 450–461.

Reviews how IIT explains several aspects of the relationship between consciousness and the brain, offers various counterintuitive empirical predictions, and provides a path to assessing consciousness in non-communicative patients.

Abstract: In this Opinion article, we discuss how integrated information theory accounts for several aspects of the relationship between consciousness and the brain. Integrated information theory starts from the essential properties of phenomenal experience, from which it derives the requirements for the physical substrate of consciousness. It argues that the physical substrate of consciousness must be a maximum of intrinsic cause–effect power and provides a means to determine, in principle, the quality and quantity of experience. The theory leads to some counterintuitive predictions and can be used to develop new tools for assessing consciousness in non-communicative patients.

Complete paper

Breakdown of cortical effective connectivity during sleep

Massimini, M., Ferrarelli, F., Huber, R., Esser, S. K., Singh, H., & Tononi, G. (2005). Science, 309(5744), 2228–2232. 

Uses a causal approach by combining transcranial magnetic stimulation and high-density EEG to show that the fading of consciousness during certain stages of sleep may be related to a breakdown in cortical effective connectivity. 

Abstract: When we fall asleep, consciousness fades yet the brain remains active. Why is this so? To investigate whether changes in cortical information transmission play a role, we used transcranial magnetic stimulation together with high-density electroencephalography and asked how the activation of one cortical area (the premotor area) is transmitted to the rest of the brain. During quiet wakefulness, an initial response (∼15 milliseconds) at the stimulation site was followed by a sequence of waves that moved to connected cortical areas several centimeters away. During non–rapid eye movement sleep, the initial response was stronger but was rapidly extinguished and did not propagate beyond the stimulation site. Thus, the fading of consciousness during certain stages of sleep may be related to a breakdown in cortical effective connectivity.

Complete paper

Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness

Ferrarelli, F., Massimini, M., Sarasso, S., Casali, A., Riedner, B. A., Angelini, G., ... & Pearce, R. A. (2010). Proceedings of the National Academy of Sciences, 107(6), 2681–2686.

Uses transcranial magnetic stimulation together with high-density EEG to show that the breakdown of cortical effective connectivity may underlie loss of consciousness induced by pharmacologic agents. 

Abstract: By employing transcranial magnetic stimulation (TMS) in combination with high-density electroencephalography (EEG), we recently reported that cortical effective connectivity is disrupted during early non-rapid eye movement (NREM) sleep. This is a time when subjects, if awakened, may report little or no conscious content. We hypothesized that a similar breakdown of cortical effective connectivity may underlie loss of consciousness (LOC) induced by pharmacologic agents. Here, we tested this hypothesis by comparing EEG responses to TMS during wakefulness and LOC induced by the benzodiazepine midazolam. Unlike spontaneous sleep states, a subject’s level of vigilance can be monitored repeatedly during pharmacological LOC. We found that, unlike during wakefulness, wherein TMS triggered responses in multiple cortical areas lasting for >300 ms, during midazolam-induced LOC, TMS-evoked activity was local and of shorter duration. Furthermore, a measure of the propagation of evoked cortical currents (significant current scattering, SCS) could reliably discriminate between consciousness and LOC. These results resemble those observed in early NREM sleep and suggest that a breakdown of cortical effective connectivity may be a common feature of conditions characterized by LOC. Moreover, these results suggest that it might be possible to use TMS-EEG to assess consciousness during anesthesia and in pathological conditions, such as coma, vegetative state, and minimally conscious state.

Complete paper

A perturbational approach for evaluating the brain's capacity for consciousness

Massimini, M., Boly, M., Casali, A., Rosanova, M., & Tononi, G. (2009). Progress in Brain Research, 177, 201–214.

Starts from IIT-derived principles to propose an empirical approach to measuring consciousness whereby direct cortical perturbations and recordings are used to gauge the co-occurrence of integration and differentiation in thalamocortical circuits.  

Abstract: How do we evaluate a brain's capacity to sustain conscious experience if the subject does not manifest purposeful behaviour and does not respond to questions and commands? What should we measure in this case? An emerging idea in theoretical neuroscience is that what really matters for consciousness in the brain is not activity levels, access to sensory inputs or neural synchronization per se, but rather the ability of different areas of the thalamocortical system to interact causally with each other to form an integrated whole. In particular, the information integration theory of consciousness (IITC) argues that consciousness is integrated information and that the brain should be able to generate consciousness to the extent that it has a large repertoire of available states (information), yet it cannot be decomposed into a collection of causally independent subsystems (integration). To evaluate the ability to integrate information among distributed cortical regions, it may not be sufficient to observe the brain in action. Instead, it is useful to employ a perturbational approach and examine to what extent different regions of the thalamocortical system can interact causally (integration) and produce specific responses (information). Thanks to a recently developed technique, transcranial magnetic stimulation and high-density electroencephalography (TMS/hd-EEG), one can record the immediate reaction of the entire thalamocortical system to controlled perturbations of different cortical areas. In this chapter, using sleep as a model of unconsciousness, we show that TMS/hd-EEG can detect clear-cut changes in the ability of the thalamocortical system to integrate information when the level of consciousness fluctuates across the sleep-wake cycle. Based on these results, we discuss the potential applications of this novel technique to evaluate objectively the brain's capacity for consciousness at the bedside of brain-injured patients.

Complete paper

A theoretically based index of consciousness independent of sensory processing and behavior

Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). Science Translational Medicine, 5(198), 198ra105.

Introduces and tests an index of the level of consciousness called the perturbational complexity index (PCI), calculated by perturbing the cortex with transcranial magnetic stimulation and then compressing the spatiotemporal pattern of the response to measure its algorithmic complexity.

Abstract: One challenging aspect of the clinical assessment of brain-injured, unresponsive patients is the lack of an objective measure of consciousness that is independent of the subject’s ability to interact with the external environment. Theoretical considerations suggest that consciousness depends on the brain’s ability to support complex activity patterns that are, at once, distributed among interacting cortical areas (integrated) and differentiated in space and time (information-rich). We introduce and test a theory-driven index of the level of consciousness called the perturbational complexity index (PCI). PCI is calculated by (i) perturbing the cortex with transcranial magnetic stimulation (TMS) to engage distributed interactions in the brain (integration) and (ii) compressing the spatiotemporal pattern of these electrocortical responses to measure their algorithmic complexity (information). We test PCI on a large data set of TMS-evoked potentials recorded in healthy subjects during wakefulness, dreaming, nonrapid eye movement sleep, and different levels of sedation induced by anesthetic agents (midazolam, xenon, and propofol), as well as in patients who had emerged from coma (vegetative state, minimally conscious state, and locked-in syndrome). PCI reliably discriminated the level of consciousness in single individuals during wakefulness, sleep, and anesthesia, as well as in patients who had emerged from coma and recovered a minimal level of consciousness. PCI can potentially be used for objective determination of the level of consciousness at the bedside.

Complete paper
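
The compression step of PCI can be illustrated with a toy calculation. The sketch below computes a normalized Lempel–Ziv complexity for an arbitrary binarized response matrix (channels × time); it is only a schematic stand-in for the published pipeline, which binarizes statistically significant source-level TMS-evoked activity and normalizes by the source entropy.

    import numpy as np

    def lz76_complexity(s):
        # Number of Lempel-Ziv (1976) components in binary string s,
        # counted with the Kaspar-Schuster scanning scheme.
        n = len(s)
        c, l, i, k, k_max = 1, 1, 0, 1, 1
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:
                    c += 1
                    break
            else:
                k_max = max(k, k_max)
                i += 1
                if i == l:          # no earlier match: start a new component
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    def normalized_complexity(binary_matrix):
        # Lempel-Ziv complexity of the flattened binary spatiotemporal pattern,
        # scaled so that a maximally random pattern of the same size scores ~1.
        s = "".join(str(b) for b in binary_matrix.astype(int).ravel())
        n = len(s)
        return lz76_complexity(s) * np.log2(n) / n

    rng = np.random.default_rng(0)
    rich = rng.integers(0, 2, size=(60, 300))                          # differentiated, widespread pattern
    stereotyped = np.tile(rng.integers(0, 2, size=(60, 1)), (1, 300))  # local, stereotyped pattern
    print(normalized_complexity(rich), normalized_complexity(stereotyped))

A spatially and temporally differentiated pattern yields a value near 1, while a stereotyped pattern yields a value near 0, mirroring the integration-plus-differentiation logic behind the index.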

The neural correlates of dreaming

Siclari, F., Baird, B., Perogamvros, L., Bernardi, G., LaRocque, J. J., Riedner, B., Boly, M., Postle, B., & Tononi, G. (2017). Nature Neuroscience, 20(6), 872–878.

Finds that in both NREM and REM sleep, reports of dream experience are associated with local decreases in low-frequency activity in posterior cortical regions, with high-frequency activity in these regions correlating with specific dream contents.

Abstract: Consciousness never fades during waking. However, when awakened from sleep, we sometimes recall dreams and sometimes recall no experiences. Traditionally, dreaming has been identified with rapid eye-movement (REM) sleep, characterized by wake-like, globally 'activated', high-frequency electroencephalographic activity. However, dreaming also occurs in non-REM (NREM) sleep, characterized by prominent low-frequency activity. This challenges our understanding of the neural correlates of conscious experiences in sleep. Using high-density electroencephalography, we contrasted the presence and absence of dreaming in NREM and REM sleep. In both NREM and REM sleep, reports of dream experience were associated with local decreases in low-frequency activity in posterior cortical regions. High-frequency activity in these regions correlated with specific dream contents. Monitoring this posterior 'hot zone' in real time predicted whether an individual reported dreaming or the absence of dream experiences during NREM sleep, suggesting that it may constitute a core correlate of conscious experiences in sleep.

Complete paper

Plasticity in the structure of visual space

Song, C., Haun, A. M., & Tononi, G. (2017). eNeuro, 4(3), 0080-17.2017.

Tests IIT’s claim that the strength of lateral connections between neurons in the visual cortex shapes the experience of spatial relatedness between locations in the visual field.

Abstract: Visual space embodies all visual experiences, yet what determines the topographical structure of visual space remains unclear. Here we test a novel theoretical framework that proposes intrinsic lateral connections in the visual cortex as the mechanism underlying the structure of visual space. The framework suggests that the strength of lateral connections between neurons in the visual cortex shapes the experience of spatial relatedness between locations in the visual field. As such, an increase in lateral connection strength shall lead to an increase in perceived relatedness and a contraction in perceived distance. To test this framework through human psychophysics experiments, we used a Hebbian training protocol in which two-point stimuli were flashed in synchrony at separate locations in the visual field, to strengthen the lateral connections between two separate groups of neurons in the visual cortex. After training, participants experienced a contraction in perceived distance. Intriguingly, the perceptual contraction occurred not only between the two training locations that were linked directly by the changed connections, but also between the outward untrained locations that were linked indirectly through the changed connections. Moreover, the effect of training greatly decreased if the two training locations were too close together or too far apart and went beyond the extent of lateral connections. These findings suggest that a local change in the strength of lateral connections is sufficient to alter the topographical structure of visual space.

Complete paper

Neural correlates of pure presence

Boly, M., Smith, R., Vigueras Borrego, G., Pozuelos, J. P., Allaudin, T., Malinowski, P., & Tononi, G. (2024). bioRxiv, 2024-04. doi 10.1101/2024.04.18.590081.

Shows that states of “pure presence” achieved by long-term meditators correlate with brain states in which cerebral cortex is highly awake (decreased delta activity) but neural activity is broadly reduced (decreased gamma activity), in line with IIT’s predictions.

Abstract: Pure presence (PP) is described in several meditative traditions as an experience of a vast, vivid luminosity devoid of perceptual objects, thoughts, and self. Integrated information theory (IIT) predicts that such vivid experiences may occur when the substrate of consciousness in the cerebral cortex is virtually silent. To assess this prediction, we recorded 256-electrode high-density electroencephalography (hdEEG) in long-term meditators of Vajrayana and Zen traditions who were able to reach PP towards the end of a retreat. Because neural activity is typically associated with increased EEG gamma power, we predicted that PP should be characterized by widespread gamma decreases. For meditators of both traditions, PP was associated with decreased broadband hdEEG power compared to within-meditation mind-wandering, most consistent in the gamma range (30–45 Hz). Source reconstruction indicated that gamma decrease was widespread but especially pronounced in posteromedial cortex. PP broadband power also decreased compared to all other control conditions, such as watching or imagining a movie, active thinking, and open-monitoring. PP delta power (1–4Hz) was also markedly decreased compared to dreamless sleep. PP with minimal perceptual contents or accompanied by a feeling of bliss showed hdEEG signatures close to PP. In contrast, gamma activity increased during phases characterized by rich perceptual contents, such as visualization or mantra recitation. Overall, these results are consistent with PP being a state of vivid consciousness during which the cerebral cortex is highly awake (decreased delta activity) but neural activity is broadly reduced (decreased gamma activity), in line with IIT’s predictions. 

Complete paper
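
As a generic illustration of the kind of spectral measure involved (not the study's actual pipeline, which uses 256-channel, source-reconstructed hdEEG), band power in the delta (1–4 Hz) and gamma (30–45 Hz) ranges can be estimated from a single channel with Welch's method; the sampling rate and signal below are placeholders.

    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, fmin, fmax):
        # Integrate the Welch power spectral density of x between fmin and fmax (Hz).
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        band = (f >= fmin) & (f <= fmax)
        return np.trapz(pxx[band], f[band])

    fs = 500.0                                                     # placeholder sampling rate (Hz)
    eeg = np.random.default_rng(1).standard_normal(int(60 * fs))   # placeholder 60 s signal

    delta = band_power(eeg, fs, 1, 4)      # delta band (1-4 Hz)
    gamma = band_power(eeg, fs, 30, 45)    # gamma band (30-45 Hz)
    print(delta, gamma)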

Consciousness and sleep

Tononi, G., Boly, M., & Cirelli, C. (2024). Neuron, 112(10), 1568–1594.

Reviews the neurophysiological differences between dreaming and dreamless sleep, shedding light on the substrate of consciousness.

Abstract: Sleep is a universal, essential biological process. It is also an invaluable window on consciousness. It tells us that consciousness can be lost but also that it can be regained, in all its richness, when we are disconnected from the environment and unable to reflect. By considering the neurophysiological differences between dreaming and dreamless sleep, we can learn about the substrate of consciousness and understand why it vanishes. We also learn that the ongoing state of the substrate of consciousness determines the way each experience feels regardless of how it is triggered—endogenously or exogenously. Dreaming consciousness is also a window on sleep and its functions. Dreams tell us that the sleeping brain is remarkably lively, recombining intrinsic activation patterns from a vast repertoire, freed from the requirements of ongoing behavior and cognitive control.

Complete paper

Implications of IIT 

Only what exists can cause: An intrinsic view of free will

Tononi, G., Albantakis, L., Boly, M., Cirelli, C., Koch, C. (2022). arXiv: 2206.02069.

Argues, based on IIT’s intrinsic powers ontology, that we—not our neurons or atoms—are the true causes of our willed actions, for which we bear true responsibility.

Abstract: This essay addresses the implications of integrated information theory (IIT) for free will. IIT is a theory of what consciousness is and what it takes to have it. According to IIT, the presence of consciousness is accounted for by a maximum of cause-effect power in the brain. Moreover, the way specific experiences feel is accounted for by how that cause-effect power is structured. If IIT is right, we do have free will in the fundamental sense: we have true alternatives, we make true decisions, and we - not our neurons or atoms - are the true cause of our willed actions and bear true responsibility for them. IIT's argument for true free will hinges on the proper understanding of consciousness as true existence, as captured by its intrinsic powers ontology: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause.

Complete paper

Dissociating artificial intelligence from artificial consciousness

Findlay, G., Marshall, W., Albantakis, L., David, I., Mayner, W.G.P., Koch, C., Tononi, G. (Forthcoming).

Challenges computational-functionalist approaches to consciousness by using the IIT formalism to demonstrate that it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience.

Abstract: Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, having the same cognitive abilities, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which---a basic stored-program computer---simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.

Evolution of integrated causal structures in animats exposed to environments of increasing complexity

Albantakis, L., Hintze, A., Koch, C., Adami, C., & Tononi, G. (2014). PLoS Computational Biology, 10(12), e1003966.

Uses a simulation of small, adaptive logic-gate networks (“animats”) to demonstrate how evolutionary pressure would favor the development of substrates capable of high integrated information.

Abstract: Natural selection favors the evolution of brains that can capture fitness-relevant features of the environment's causal structure. We investigated the evolution of small, adaptive logic-gate networks (“animats”) in task environments where falling blocks of different sizes have to be caught or avoided in a ‘Tetris-like’ game. Solving these tasks requires the integration of sensor inputs and memory. Evolved networks were evaluated using measures of information integration, including the number of evolved concepts and the total amount of integrated conceptual information. The results show that, over the course of the animats' adaptation, i) the number of concepts grows; ii) integrated conceptual information increases; iii) this increase depends on the complexity of the environment, especially on the requirement for sequential memory. These results suggest that the need to capture the causal structure of a rich environment, given limited sensors and internal mechanisms, is an important driving force for organisms to develop highly integrated networks (“brains”) with many concepts, leading to an increase in their internal complexity.

Complete paper

What caused what? A quantitative account of actual causation using dynamical causal networks

Albantakis, L., Marshall, W., Hoel, E., & Tononi, G. (2019). Entropy, 21(5), 459.

Extends the IIT framework to determine actual causation in discrete dynamical systems by identifying and quantifying the strength of all actual causes and effects linking two consecutive system states. 

Abstract: Actual causation is concerned with the question: “What caused what?” Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system’s causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the “what caused what?” question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation, that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.

Complete paper

Causal reductionism and causal structures

Grasso, M., Albantakis, L., Lang, J. P., & Tononi, G. (2021). Nature Neuroscience, 24(10), 1348–1355.

Extends the IIT framework to demonstrate that causal reductionism cannot provide a complete and coherent account of ‘what caused what’ and outlines an explicit, operational approach to analyzing causal structures.

Abstract: Causal reductionism is the widespread assumption that there is no room for additional causes once we have accounted for all elementary mechanisms within a system. Due to its intuitive appeal, causal reductionism is prevalent in neuroscience: once all neurons have been caused to fire or not to fire, it seems that causally there is nothing left to be accounted for. Here, we argue that these reductionist intuitions are based on an implicit, unexamined notion of causation that conflates causation with prediction. By means of a simple model organism, we demonstrate that causal reductionism cannot provide a complete and coherent account of ‘what caused what’. To that end, we outline an explicit, operational approach to analyzing causal structures.

Complete paper

Other Essential Readings

Consciousness and the fallacy of misplaced objectivity 

Ellia, F., Hendren, J., Grasso, M., Kozma, C., Mindt, G., Lang, J.P., Haun, A.M., Albantakis, L., Boly, M., and Tononi, G. (2021). Neuroscience of Consciousness, 2021(2), niab032. 

Argues that a science of consciousness needs to account for the subjective properties of experience itself, not merely for its behavioral, functional, or neural correlates.

Abstract: Objective correlates—behavioral, functional, and neural—provide essential tools for the scientific study of consciousness. But reliance on these correlates should not lead to the ‘fallacy of misplaced objectivity’: the assumption that only objective properties should and can be accounted for objectively through science. Instead, what needs to be explained scientifically is what experience is intrinsically—its subjective properties—not just what we can do with it extrinsically. And it must be explained; otherwise the way experience feels would turn out to be magical rather than physical. We argue that it is possible to account for subjective properties objectively once we move beyond cognitive functions and realize what experience is and how it is structured. Drawing on integrated information theory, we show how an objective science of the subjective can account, in strictly physical terms, for both the essential properties of every experience and the specific properties that make particular experiences feel the way they do.

Complete paper

The unfathomable richness of seeing

Haun, A. M., & Tononi, G. PsyArXiv: jmg35. doi: 10.31234/osf.io/jmg35.

Argues that “seeing” is much more than “noticing”—against the claim that visual experience is sparse and its apparent richness illusory.

Abstract: Most experts hold that visual experience is remarkably sparse and its apparent richness is illusory. Indeed, we fail to notice the vast majority of what we think we see, and seem to rely instead on a high-level summary of a visual scene. However, we argue here that seeing is much more than noticing, and is in fact unfathomably rich. We distinguish among three levels of visual phenomenology: a high-level description of a scene based on the categorization of “objects,” an intermediate level composed of “groupings” of simple visual features such as colors, and a base-level visual field composed of “spots” and their spatial relations. We illustrate that it is impossible to see the objects that underlie a high-level description without seeing the groupings that compose them, and we cannot see the groupings without seeing the visual field to which they are bound. We then argue that the way the visual field feels—its spatial extendedness—can only be accounted for by a phenomenal structure composed of innumerable distinctions and relations. It follows that most of what we see has no functional counterpart—it cannot be used, reported, or remembered. And yet we see it.

Complete paper

Shannon information, integrated information, and the brain: message and meaning. 

Zaeemzadeh, A., Tononi, G. (Forthcoming).

Contrasts information as the communication of messages (Shannon information) with information as the intrinsic meaning—that is, feeling—of an experience (integrated information). 

Abstract: Information theory, introduced by Shannon, has been extremely successful and influential as a mathematical theory of communication. Shannon’s notion of information does not consider the meaning of the messages being communicated but only their probability. Even so, computational approaches regularly appeal to “information processing” to study how meaning is encoded and decoded in natural and artificial systems. Here, we contrast Shannon information theory with integrated information theory (IIT), which was developed to account for the presence and properties of consciousness. IIT considers meaning as integrated information and characterizes it as a structure, rather than as a message or code. In principle, IIT’s axioms and postulates allow one to “unfold” a cause–effect structure from a substrate in a state—a structure that fully defines the intrinsic meaning of an experience and its contents. It follows that the communication of information as meaning requires similarity between cause–effect structures of sender and receiver.

Of maps and grids

Grasso, M., Haun, A. M., & Tononi, G. (2021). Neuroscience of Consciousness, 2021(2), niab022. 

Shows how phenomenal properties of spatial experience can be accounted for by a grid-like substrate but not by a map-like one, despite their being functional analogs.

Abstract: Neuroscience has made remarkable advances in accounting for how the brain performs its various functions. Consciousness, too, is usually approached in functional terms: the goal is to understand how the brain represents information, accesses that information, and acts on it. While useful for prediction, this functional, information-processing approach leaves out the subjective structure of experience: it does not account for how experience feels. Here, we consider a simple model of how a “grid-like” network meant to resemble posterior cortical areas can represent spatial information and act on it to perform a simple “fixation” function. Using standard neuroscience tools, we show how the model represents topographically the retinal position of a stimulus and triggers eye muscles to fixate or follow it. Encoding, decoding, and tuning functions of model units illustrate the working of the model in a way that fully explains what the model does. However, these functional properties have nothing to say about the fact that a human fixating a stimulus would also “see” it—experience it at a location in space. Using the tools of Integrated Information Theory, we then show how the subjective properties of experienced space—its extendedness—can be accounted for in objective, neuroscientific terms by the “cause-effect structure” specified by the grid-like cortical area. By contrast, a “map-like” network without lateral connections, meant to resemble a pretectal circuit, is functionally equivalent to the grid-like system with respect to representation, action, and fixation but cannot account for the phenomenal properties of space.

Complete paper

Acknowledgments

The current version of the IIT Wiki was created primarily by Jeremiah Hendren, Matteo Grasso, and Bjørn Erik Juel, in close consultation with Giulio Tononi. The content builds especially on Tononi's book On Being (forthcoming) and the many theoretical and empirical IIT papers written by researchers at the UW–Madison Center for Sleep and Consciousness over the last two decades.

The IIT Wiki was made possible in part through the support of the Templeton World Charity Foundation. IIT research more generally has been supported by the Templeton World Charity Foundation, the Distinguished Chair in Consciousness Science at UW–Madison, the David P. White Chair in Sleep Medicine at UW–Madison, the Tiny Blue Dot Foundation, the National Institutes of Health, the Natural Sciences and Engineering Research Council of Canada, the Paul G. Allen Family Foundation, and the McDonnell Foundation, among others.