
HBL

The Harry Binswanger Letter

  • Author
    • #54754

      I’ve just listened to this week’s MoTM on Objectivism and Pragmatism, where at one point HB points out that norms of cognition generally wouldn’t matter for Rand’s immortal, indestructible robot, and specifically that it wouldn’t care about avoiding contradiction. At the risk of overindulging in an intentionally absurd hypothetical, I want to make an even stronger claim: the robot can’t contradict himself!

      As HB points out, contradictions neither exist in reality nor even in a single moment of awareness. The phenomenon of “holding a contradiction” or “contradicting oneself” refers to a fact about one’s actions, particularly one’s cognitive actions, over time. But what would this mean for the robot?

      Considering existential actions first: I might go to sleep early one night on the grounds that I hold 8 hours of sleep as vital for human health, and go to sleep late the next night without any good superseding reason, thus implicitly acting on the premise that 8 hours is not vital for human health. But whether a robot held 8 hours as vital for human health or not, this would not change his existential actions in any way. And by the nature of the hypothesis, there is no equivalent of “vital for robot health.”

      But what about cognitive actions? If one minute I look at a brown table and judge “this table is brown,” and a moment later “this table is green,” I have contradicted myself. But what makes the latter judgment false? By making that judgment I am grouping the brown table with the grass and the jello as distinct from the brown fence and the brown dirt. But why can’t I just be grouping along some other contrived similarity that the table does share with the grass and jello and that distinguishes it from the fence and dirt, modifying my understanding of precisely what the similarity “brown” refers to as part of normal conceptual development (see grue vs. bleen)? As HB discussed at the MoTM, I think all of the reasons (e.g., fundamentality) ultimately come down to life vs. death, and so there is no way to distinguish the robot’s simply refining his concept from the robot’s contradicting himself.

      /sb

    • #103743

      Re: Shea Levy’s post 54754 of 5/27/25

      Good thinking (which, incidentally, the unaffectable robot couldn’t be said to do).

      I want to take it further, and make a correction regarding forming concepts of characteristics, such as “green”:

      I am grouping the brown table with the grass and the jello as distinct from the brown fence and the brown dirt.

      To grasp “green” is not to classify some entity with other entities, but to classify a characteristic (hue). There is no concept “green things”; it would be a gigantic package-deal to form a mental file for all the various things, from grass to lime jello, that have a green color.

      The use of the word “grouping” obscures this; the word “integration” makes it clear that it would be hopeless to attempt to integrate grass, some frogs, some cars, jello of just one flavor, some pond scum, pine needles, etc., into one unit. To form concepts of characteristics is not to form a concept for entities having that characteristic.

      To take Shea’s idea further, I don’t think the robot could make a judgment at all. Judging that S is P is a goal-directed action, and the robot can’t engage in goal-directed action because it faces no alternatives.

      The robot can simulate goal-directed action, behaviorally and (if it is imagined to be conscious) mentally. But each would be only a simulation.

      A computer, for instance, can’t make judgments. It can combine and separate electric voltages and store a pattern of high and low voltages in its transistors, but none of that would be judging, even if the computer were imagined to be conscious.

      This is why AI agents aren’t embarrassed when you tell them that their output was wrong. :)

      /sb
