On Sentience and Substitution

Point 1 - On the Theseus Spectrum of Consciousness

  • Argument I am aware of: On AI/AGI sentience in particular, a thought experiment is put forward that AGI sentience must be feasible because a human's cybernetic migration would result in a sentient being. That is, if a human can gradually replace each part of their brain with non-organic components and still be sentient once fully replaced, this implies that a being made from the same components from the beginning must also be sentient
    • Supporting this is the claim that the human's sentience does not change at any point during this replacement process
    • Supporting this is the claim that we cannot give a valid description of reduced sentience at any individual step of component replacement in the human
  • My position is that, at the very least, this line of reasoning is flawed
    • In the empiricist tabula rasa perspective, 

Point 2 - On Decomposability of Consciousness as an Approach to Substitution/Simulation

  • Argument I am aware of:
    • Electronic digital hardware can perform computation equivalent to that of neurons, which implies it is feasible to replace a full human brain by replacing every neuron with an electronic equivalent. This results in an equivalently sentient brain because the components are equivalent
  • My position is that, at the very least, this makes some false assumptions:
    • While neurons are similar to electronic digital components, there are known dissimilarities too (ref, roboticist describes additional features)
    • Even if these additional features are also simulated, I expect the underlying physical form to expose more properties at different scales (ie: we may view a given electronic component as equivalent to a neuron at a given scale, but I feel this is unlikely to hold at arbitrary size and time scales)
    • My position is that the exercise of substitution in this approach is to additively select observed features of the brain from the bottom up, and that this is flawed because bottom-up approaches miss emergent properties at different scales, which leads to incomplete simulations.
    • Additionally, this approach is subject to the exploding magnitude involved not only in the count of components, but in the number of possible interactions across complex dimensions (see the sketch after this list)
    • To enumerate the complexities:
      • Time scales:
        • How does the biological component evolve over time and exist at different time scales as compared to the digital component? Are these the same? They are at least not the same in the sense that excluding certain properties, seen as deficiencies or degradations, is the driver for substitution in the first place
        • In this approach we are playing the game of selecting which properties of the biological brain to populate at different time scales, including defining how properties evolve through the transition across time scales. Given that, we cannot generally maintain the assumption that equivalent components will result in an equivalent outcome if we are introducing divergence at different time scales. We cannot assume these alterations have a null effect on the final simulated result (the sketch after this list illustrates how quickly a tiny divergence compounds)
      • Size scales:
        • Is a component of the brain (eg: a lobe) equivalent to the corresponding structure built from electronic digital neurons? To what extent is the behavior of the whole influenced by the physical form of the parts? To what extent can we rule out emergent properties based on underlying physical properties?
        • Equivalence is grounded in the smallest unit at which emergent properties can be known not to exist, which in general is the smallest unit that is indistinguishable from the simulated substitute. This is because emergent properties can arise from the multiplication of any subtle physical property over a larger scale.
        • Even if complete equivalence is not the goal, equivalence of any particular property, such as sentience or the supporting properties of sentience, is subject to the above
        • If it can be known that either (a) a given property does not result in any emergent properties at larger scales, or that (b) a given property does not influence any emergent properties required for the target property of sentience, then in that case the physical property and the corresponding component can be substituted
        • Given that, following the approach of substitution would require either (a) ruling out all emergent properties or (b) identifying the specific properties involved in sentience
        • I hold the position that both of the above are not possible
      • Permutation of interactions across scales: 
        • A complete simulation will produce an equivalent outcome for each interaction that occurs within the subject. As a prerequisite to achieving this in the additive approach, we would need to enumerate the interactions for which we seek parity
        • The number of relevant scales can be defined by the distance between each scale in scope, and the distance between each scale in scope is the size of the smallest unit - that is, the addition of a single unit has the potential to introduce emergent properties and enter us into a new canonical scale
          • After we enumerate the relevant scales (equal to the number of indistinguishable units making up the whole, in the case where no intermediate component provides the indistinguishability property), we must also permute them, since interactions can occur across any arbitrary pair of scales - it is not generally the case that the participants in a given interaction are bound by some distance between their scales (though scale separation theory provides some guidelines where this may be untrue to an extent)
    • The counter to all of this is that the goal of neuron substitution is not to create an exact, perfect simulation of the brain. But if it is the case that we are targeting a different whole, why should we assume that the property of sentience would be preserved just because a low-level component is substituted with an 'equivalent'?
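
A minimal sketch in Python of the magnitudes referenced above. The assumptions are mine, for illustration only: the system is modeled as ~86 billion indistinguishable units (a commonly cited neuron-count estimate), interactions are treated as unordered pairs, and the logistic map stands in for any nonlinear component dynamic; the function names are hypothetical.

    import math

    # Sketch only: model the substituted system as n indistinguishable units
    # and count what an additive, bottom-up parity check must enumerate.
    N_UNITS = 86_000_000_000  # commonly cited neuron-count estimate

    def pairwise_interactions(n: int) -> int:
        """Unordered pairs of units: C(n, 2) = n * (n - 1) / 2."""
        return math.comb(n, 2)

    def cross_scale_pairings(num_scales: int) -> int:
        """Unordered pairs of scales; an interaction may span any two scales."""
        return math.comb(num_scales, 2)

    print(f"pairwise unit interactions: {pairwise_interactions(N_UNITS):.2e}")

    # Worst case argued above: each added unit can open a new canonical scale,
    # so the count of scales tracks the count of units.
    print(f"cross-scale pairings: {cross_scale_pairings(N_UNITS):.2e}")

    # Time scales: two 'equivalent' components whose state differs by one part
    # in a trillion diverge completely under a simple nonlinear update rule
    # (logistic map, r = 4), so divergence introduced at one time scale cannot
    # be assumed to have a null effect on the final simulated result.
    x, y = 0.4, 0.4 + 1e-12
    for _ in range(60):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    print(f"divergence after 60 steps: {abs(x - y):.3f}")

Both pair counts land near 3.7 x 10^21 before time scales or higher-order interactions are even considered; the point is the enumeration burden, not the exact figure.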

Point 3 - On the Quantification of Sentience

  • Argument I am aware of: a conversion rate such as three chickens to one human, meaning killing three chickens is equivalent to killing one human - does any such formula exist?
  • My position is that this is nonsensical, since a quantified unit implies (a) an atomic kernel (eg: the smallest unit of sentience) and (b) regularity as the unit is multiplied (eg: 2 units of sentience = twice as sentient)
  • But while we have a concept of sentience and a vague agreement that sentience may be a spectrum, we are nowhere close to a specific atomic kernel, and I do not believe that multiplication of that kernel would result in predictable increases in observed sentience (the implied model is written out below)
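
Written out, the model implied by any such formula rests on exactly those two assumptions. The symbols s_0 (atomic kernel) and S (total sentience) are mine, introduced only to expose the structure:

    (a)  there exists an atomic kernel s_0, the smallest unit of sentience
    (b)  regularity under aggregation: S(n) = n · s_0

    "3 chickens = 1 human"  requires  3 · S(chicken) = S(human)

Rejecting either (a) or (b) - and my position rejects both - leaves any such conversion rate undefined.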