Integrated information theory (IIT) is not just a theory of consciousness but also an intrinsic ontology. This framework differs significantly from the prevailing approaches within psychology and neuroscience—and in fact within science at large—because it fully incorporates human experience into its premises and methodology.
This part of the IIT Wiki is under construction. For the time being, below are pointers to relevant published works, with abstracts, that indicate the future content of this section.
Please see Findlay et al. (Forthcoming):
"Developments in machine learning and computing power suggest that artificial general intelligence may be within reach. This raises the question of artificial consciousness: if a computer were functionally equivalent to a human, having the same cognitive abilities, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and its essential properties, translates them into measurable quantities, can be validated on humans, and can be extrapolated to any physical system. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which—a basic stored-program computer—simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent; (ii) that this conclusion applies no matter how one ‘black-boxes’ the computer’s units; and (iii) that even certain Turing-complete systems, which could theoretically pass the Turing test and simulate a human brain in detail, would be negligibly conscious."
Findlay, G., Marshall, W., David, I., Albantakis, L., Mayner, W., Koch, C., and Tononi, G. Forthcoming. Dissociating Artificial Intelligence from Artificial Consciousness.

Please see Mayner, Juel, and Tononi (Forthcoming):
"Here, we extend the integrated information theory of consciousness to assess how intrinsic meanings are triggered by extrinsic stimuli. Using simple simulated systems, we show that perception is a structured interpretation, triggered by a stimulus but provided by a system’s intrinsic connectivity. We then show that the “matching” between a system and an environment can be measured by assessing the diversity of intrinsic meanings triggered by typical sequences of stimuli. This approach offers a way of understanding how the meaning of an experience, which is necessarily intrinsic to the subject, can refer to extrinsic entities or processes."
Mayner, W., Juel, B., and Tononi, G. Forthcoming. Meaning and matching: quantifying how the structure of experience matches the environment.

Please see Tononi et al. (2022), "Only what exists can cause: An intrinsic view of free will":
"If IIT is right, we do have free will in the fundamental sense: we have true alternatives, we make true decisions, and we—not our neurons or atoms—are the true cause of our willed actions and bear true responsibility for them. IIT's argument for true free will hinges on the proper understanding of consciousness as true existence, as captured by its intrinsic powers ontology: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause."
Tononi, G., Albantakis, L., Boly, M., Cirelli, C., & Koch, C. (2022). Only what exists can cause: An intrinsic view of free will. arXiv preprint arXiv:2206.02069.