• 0 Posts
  • 7 Comments
Joined 9 months ago
Cake day: March 8th, 2025

  • “I think it is likely that consciousness emerges on the aggregate macro-level from processes that are simple on the micro-level. Such phenomena do lend themselves to be described or indeed understood best with statistics.”

    How’s that?

    If it’s not defined and we have no way to measure the activity that would allow us to even create a model, how could statistics be used?

    I don’t foresee the tech needed to record an entire brain’s electrical information while the human is alive, including its constantly changing state, every neuron, every connection (all the architecture), and its myelination, arriving any time soon.

    Not only that, but you’d then have to translate any statistical prediction into a real output of behavior or “experience”.

    Not everything can be measured. An inability to measure something does not mean it’s not real and rooted in the material world.

    We know enough to know that consciousness/awareness (or whatever definition you want to give it) and mental processing occur in the brain.

    We can’t measure it at the whole-brain level. But we know that’s where it’s happening and that it’s a product of biological phenomena.

    I’m sorry if this sounds rude, but LLMs are not self-updating systems. They are nothing like human processing. They don’t loop in the ways biology does. They don’t constantly change. They have fixed algorithms.

    They can’t be. When a prediction model’s parameters are left too open, the output just becomes nonsense.

    This has been demonstrated many times.

    Also, consciousness in animals is supported by lots of research in neuroscience.

    Even in the tiny flatworm.

    It’s not unique to humans. And it appears to exist across animals.

    The complexity would obviously vary based on biological limitations. But it appears to be, at least in part, a product of sensory processing.

    Which is often why you find it referred to as “awareness” or “self-awareness”.

    Awareness of self is necessary for any organism to distinguish itself from its environment, predict outcomes, and act accordingly. To even know what can be consumed or mated with. Awareness of the environment creates approach and avoidance emotions/behaviors.

    Of course this is one of many theories regarding what consciousness is. But this one seems like a pretty solid description and explains why it would exist in animals.


  • Well, that’s not my interpretation. Consciousness arises from understanding. True understanding. Not stimulus in, behavior out.

    Consciousness is not a simple exchange or matching task. Which is what the Chinese room illustrates.

    There is more to it.

    The Chinese room is modern LLMs.

    Human brains are altered by every stimulus. Physically they are constantly changing at the neuron level.

    The way inhibitory neurons work… it can’t (at present) be predicted very accurately beyond a single neuron or a small number of them.

    As I like to say: every moment, the brain is updated biologically. Biological changes. Connections weakened, strengthened, created, destroyed.

    This happens constantly.

    You can’t use statistics to predict these kinds of events.

    Although the neuroscience definition of “consciousness” is debated, it is generally taken to mean “awareness”.

    It’s something that is a product of many processes in the brain.

    And we haven’t even touched on brain oscillations and how they impact cognitive functions. Brain oscillations are heavily tied to consciousness/awareness. They sync up processes and regulate the frequency of neuron firing.
    They gatekeep the effects of stimuli as well.

    The brain is so unbelievably complicated. Research on ERPs is the best we have for predicting some specific spikes of brain activity tied to cognitive events.

    You may find the research on it is less far along than you think.

    Neuroscience knowledge is far below where most people think it is (I blame clickbait articles).

    However, it’s still an interesting area, so here is the wiki:

    https://en.wikipedia.org/wiki/Event-related_potential



    I’m familiar with the Chinese room, and yes, that’s exactly what I was trying to get at with my example of how a video looks like a person. Acts like a person. But is not a person. I didn’t want to go into the Chinese room thought experiment, but that’s what I was thinking of.

    The heuristics that humans use are not really like the probability statistics that learning models use. The models use probability cutoffs. We use incredibly error-prone shortcuts. They aren’t really “estimates” in the statistical sense. They are biases in attention and reasoning.
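
    To make “probability cutoff” concrete, here is a minimal sketch; the probabilities and the 0.5 threshold are made up purely for illustration and don’t come from any particular model:

```python
# Toy illustration (made-up numbers): a statistical model turns evidence into a
# probability and applies a hard cutoff. No shortcuts or biases, just arithmetic.
def model_decision(p_positive: float, cutoff: float = 0.5) -> str:
    """Classify purely by comparing a computed probability to a threshold."""
    return "positive" if p_positive >= cutoff else "negative"

print(model_decision(0.51))  # "positive" -- 51% clears the cutoff
print(model_decision(0.49))  # "negative" -- 49% does not
```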

    I enjoyed your speculation about using analog processing to get closer to real human brains, versus virtual.

    I think you are partially correct because it’s closer to biology. As you said. But it also can’t change. Which is not like biology. 🤷

    Humans don’t actually compute real probability. In fact, humans are so poor at true statistical probability, due to our biases and heuristics, that it’s actually amazing any human was able to break free from that hard-wired method and discover the true mathematical way of calculating probability.

    It quite literally goes against human nature. By which I mean brains are not designed to deal with probability that way.

    We actually have trouble truly understanding anything besides “very likely, basically assured” and “very unlikely, basically no chance”.

    We round percentages to one of those two categories when we think about them. (I’m simplifying, but you get what I’m saying.)

    This is why people constantly complain that weather predictions are wrong. To us, a 70% chance of rain means it certainly will rain. And when it doesn’t, we feel lied to.
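
    To put rough numbers on that: a quick toy simulation (not real forecast data) of a perfectly calibrated “70% chance of rain” forecast still comes up dry about 3 days in 10. The forecast wasn’t wrong; our mental rounding was.

```python
import random

random.seed(0)

# Toy simulation: if "70% chance of rain" is well calibrated, it should still
# stay dry on roughly 30% of such days -- no lying by the forecaster involved.
days = 10_000
dry_days = sum(1 for _ in range(days) if random.random() >= 0.70)
print(f"Dry days under a 70% forecast: {dry_days / days:.0%}")  # ~30%
```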

    I mentioned emotion and you are 100% correct that it’s a tricky concept in neuroscience (you actually seem pretty educated about this topic).

    It is ill-defined. However, the more specific emotions I refer to are approach/avoidance, and their ability to attract attention or discourage it.

    To clarify, both approach and avoidance emotions can attract attention.

    Emotional salience, by definition: it grabs attention at an emotional level and becomes interesting, either because you like it or you don’t like it (I’m simplifying).

    Stimuli with neutral emotional salience will not grab attention, will be ignored, and will not affect learning to the same degree as something that is emotionally salient.

    Your personal priorities will feed into this as well, depending on mood and whatever else you have going on in your life. Plus personality.

    It’s always changing.

    LLMs have set, fixed parameters that don’t fluctuate.

    The loop I describe is not the same as an algorithm loop.

    An algorithm loop feeds data and cycles through to get to the desired outcome.

    Sort of like those algorithms for Rubik’s cube solutions (idk if you know what I’m talking about).

    Repeat the steps enough times and you will solve the puzzle.

    That’s not the same as altering and evolving the entire system constantly. It never goes back to how it was before. It’s never stable.

    Every new cognitive event starts differently than the last because it is influenced by the preceding events. In neuroscience we call this priming.

    It literally changes the chances of a neuron firing again. And so the system is not running the same program over and over. It’s running an updated program on updated hardware. Every single iteration.
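
    Here is a toy contrast, in code, of what I mean. The “priming” update rule below is something I made up for illustration; it is not a model of real neurons, just a way to show a system whose own parameters change every time it processes something, unlike a fixed loop.

```python
# Fixed algorithmic loop: the same steps with the same rule on every pass
# (the Rubik's-cube style of loop described above).
def fixed_loop(state, step, times):
    for _ in range(times):
        state = step(state)  # the rule itself never changes
    return state

# Self-updating system: processing an input also rewrites the system's own
# parameter, so no two passes start from the same "program" (a crude,
# invented stand-in for priming).
class SelfUpdating:
    def __init__(self):
        self.firing_bias = 0.5  # made-up parameter standing in for excitability

    def process(self, stimulus: float) -> float:
        response = stimulus * self.firing_bias
        # the act of processing changes how the next stimulus will be handled
        self.firing_bias += 0.1 * (stimulus - self.firing_bias)
        return response

s = SelfUpdating()
print(s.process(1.0), s.process(1.0))  # same stimulus, different response
```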

    That’s the process of learning, and it can’t be separated from the process of experience or decision-making, at any level, within or beyond awareness.

    May I ask what your area of expertise is? Are you a computer scientist?

    You do seem to know a bit more about neuroscience than the average person. I also rarely meet anyone who has heard of the Chinese room thought experiment.

    Also I agree we are getting into philosophical areas.




  • Even just 5 years ago, these prediction models were being used in psychology research. They called them “neural networks”, which most of us neuroscientists hated, because a neural network is a biological network, not an algorithm for predicting performance on a cognitive task.

    Yet that was what it was called. A ton of papers on it, conflating the term with research on actual neural networks.

    Anywho, I recall attending a presentation on how they work and being like, “This is literally just a statistical prediction model. Have I misunderstood you?” I was informed I was correct, but that it was fancy because… mostly because they called it “neural networks”, which sounded super cool.

    And then later, when “AI art” started emerging, I realized: it’s just a prediction model. And the LLMs? Also just prediction models.

    I, someone without a computer science degree, was able to see the permanent flaws and limits of such an approach. (Though I do know statistics.)

    It boggled the mind how anyone could believe a prediction model could have consciousness. Could be “intelligent”. It’s just a prediction. Quite literally a collection of statistical equations computing probabilities based on data fed into it. How could that be intelligent?

    There is no understanding. No thinking. No ability to understand context.
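
    To show what I mean by “a collection of statistical equations computing probabilities”, here is a stripped-down sketch of the final step of next-token prediction. The vocabulary and the scores are invented for illustration; real models compute the scores with billions of parameters, but the output is still just a probability distribution.

```python
import math

# Made-up candidate tokens and scores for a prompt like "the cat sat on the ..."
vocab = ["mat", "roof", "moon"]
scores = [2.1, 0.3, -1.0]

# Softmax turns scores into probabilities; generation is picking from them.
exps = [math.exp(s) for s in scores]
probs = [e / sum(exps) for e in exps]

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")

print("predicted next token:", vocab[probs.index(max(probs))])
```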

    People outside of psychology often argue, “Isn’t human consciousness just predictions?”

    No. No, it’s not. And the way humans predict things is not even close to how a machine does it.

    We use heuristics. Emotional feedback to guide attention. Which further feeds heuristics. Which further feeds emotional salience (attention).

    A cycling that does not occur in computers.

    There is contextual learning and updating of knowledge driven by this emotion-led attention.
    Our prediction models are constantly changing. With every thought. Every decision. Every new instance of a stimulus.

    Our brains decide what’s important. We make branching exceptions and new considerations with a single new experience, which is then tweaked and reformed by subsequent experiences.

    If you think animal consciousness is simple, if you think it’s basically predictions and decisions, you have no idea what human cognition is.

    I personally don’t believe a machine will ever be able to accurately generate a human or human-like consciousness. Can it “look” like a person? Sure. Just like videos “look” like real people. But it’s just a recording. Animated CGI can “look” like real people. But it’s not. It’s made by a human. It’s just 3D meshes.

    I could be wrong about machines never being able to understand or have consciousness. But at present, that’s my opinion on the matter.