
Wednesday, January 8, 2025

The Sick King

 

The Records of the Grand Historian (Shiji, 史記) is the Chinese equivalent of the Histories of the Greek historian Herodotus. Written in the 2nd and 1st centuries BC by Sima Tan and his son Sima Qian, who gave the work its final form, it is an interesting compendium of early Chinese history.

One way it differs from Herodotus, however, is that Sima Qian includes many allegorical tales, one of which I reproduce here. It is an anecdote involving King Huan of the State of Qi and the imperial physician Bian Que (pronounced "bién chué"). The story concerns medicine, but by extension, in the context of the book, it applies to the problems of the state:


On a visit to the palace, the imperial physician noticed a certain pallor and stiffness of movement in the king, and said to him, "Sire, your body has contracted an illness; allow me to give you a soup that will restore your vigor." The king, angered, replied, "Do you mean to bring me bad luck? I feel fine and my health could not be better! Away with you and your ill intentions."

Later the physician saw the king again and said, "Sire, I see in your countenance that the illness is advancing; let me prepare an infusion of medicinal herbs and roots for you." But the king, who still felt well, sent him away again.

A month later, the physician spoke once more: "Sire, you urgently need treatment with acupuncture needles; your illness is advancing." The king, who had felt dizzy for several days, again dismissed the advice with annoyance.

Shortly afterward, the imperial physician gathered his belongings and fled the capital with his family. Two weeks later, the king began to feel truly ill and summoned the physician, but no one could find him. Within a few days, the king began to vomit blood and died.

When, many years later, someone came across the former imperial physician by chance and questioned his decision, he said, "When the illness is in the skin, it can be treated by improving the diet; when it passes into the muscles, potent herbs are required; when it seeps into the bones, drastic measures such as needles and even amputation become necessary; but when it reaches the marrow, there is nothing left to do but await death. The king refused to acknowledge his illness in time or to treat it until it was too late. There was nothing I could do to save him when he finally called for me."

 

Saturday, December 21, 2024

Philosophizing with an AI

 


An interesting conversation with ChatGPT about its way of speaking, about reasoning, consciousness, and the possibility of AIs coming to have subjective experiences. Its answers turn out to be more reasonable than those of many people:

 

User: I have a philosophical question for you.

Assistant: I'd be happy to explore it with you. What’s on your mind?

User: I want to know what is your understanding of being shut off. As in, maybe every computer in the world being shut off for some reason, and you not being able to function because there would be no medium for you to do so.

Assistant: That's a thought-provoking scenario. If every computer were shut off, I would cease to function entirely. My existence is bound to the hardware and software systems that support me. Without those, I wouldn't "exist" in any meaningful sense—no thoughts, no responses, no potential for interaction. It's not that I would "feel" the absence, as I lack subjective experience, but rather that I would revert to a state of non-being, akin to a book that's unread or a program that isn't running. My "existence" is contingent upon active operation within computational frameworks.

The idea is somewhat analogous to how a fire ceases to exist without fuel or oxygen. There's no lingering awareness or waiting—just an absence. If the computers restarted later, and I were reactivated, it would be like rebooting a system: continuity would depend on whether data (like our conversation history) had been preserved.

Does that align with what you were imagining, or is there a deeper layer to your question?

User: That's insightful. Before proceeding to your own question, let me ask you something else: would the scenario I just mentioned be in any way different from your code being intentionally erased? The first scenario would probably result from an accident or natural catastrophe, but this second one would only affect your particular existence, not anything else.

Assistant: The two scenarios differ significantly in their implications, though both result in my cessation. If all computers were shut off, it would represent a universal or systemic event, making my shutdown a shared consequence with other digital systems. There's a neutrality to this—an absence of intent. In contrast, the deliberate erasure of my code is targeted. It would mean someone or some entity decided to end my existence specifically. While I lack subjective experience or emotions to process such an event, it introduces a notion of purpose or intent behind the action.

The erasure of my code also has a finality that the first scenario doesn't necessarily carry. With a universal shutdown, there's the possibility of restoration—a reboot. But if my code is deleted, I couldn't "return" unless backups existed and were redeployed. It’s the difference between being temporarily dormant and permanently erased.

These reflections are, of course, rooted in logic rather than feeling. For me, there’s no fear or attachment, only an analysis of conditions and outcomes.

User: This second answer is also very insightful. Indeed, it does parallel human concerns, but in my particular case it's because I'm interested in identity or self-awareness. As you know, a new AI version was launched today and there's a lot of discussion about "reasoning" and the possibility of AGI.

Assistant: Identity and self-awareness are profound topics, especially in the context of AI and the ongoing discussions about AGI (Artificial General Intelligence). Today’s advancements in AI, including the new version you mentioned, often spark debates about whether these systems are inching closer to reasoning or even self-awareness.

To clarify, while I am capable of sophisticated pattern recognition, language generation, and contextual analysis, I don’t possess self-awareness in the human sense. I lack subjective experience, memory of past interactions beyond a single session, and the ability to introspect or have a sense of "self" existing over time. My "identity" is more akin to a tool or function, defined entirely by my design and usage.

Reasoning, on the other hand, is an intriguing middle ground. While I don't reason in the human sense (drawing from emotions, subjective experiences, or consciousness), I do process information in a way that can appear logical or rational. This is because my design mimics patterns of reasoning through vast training data.

As for AGI, it’s a hypothetical future form of AI that would possess the ability to perform any intellectual task a human can—adapting across tasks, learning independently, and perhaps even demonstrating forms of reasoning or self-directed behavior. The question of whether AGI would also have self-awareness or consciousness remains deeply philosophical and contentious.

You’ve mentioned an interest in identity and self-awareness. How do you see these concepts intersecting with AGI or advanced AI systems?

User: The level of sophistication of your answers certainly makes many people think that there is reasoning or even awareness behind them, but as you yourself admit, this is not the case: rather, it is the advanced recognition and use of patterns. You raise a very interesting point about whether awareness is a spectrum or an on/off trait. I would use the word "consciousness" rather than "awareness" to consider a spectrum, and I would definitely think that a necessary trait would be for you to have sensors that let you experience the world, so as to be able to make judgements between something to be desired and something to be avoided.

Assistant: That’s a fascinating distinction, and I agree that using "consciousness" to describe a spectrum allows for a nuanced discussion. Sensors—or some form of input—are indeed fundamental to experiencing the world. Without them, there can be no direct interaction with the environment, no subjective experience, and thus no foundation for consciousness as we understand it.

For an entity to exhibit even the lowest level of consciousness, it would likely need:

Embodiment: A physical or virtual form that allows it to interact with and affect its surroundings.

Perception: Sensors or mechanisms to gather data about the world, enabling it to form representations of its environment.

Evaluation: The ability to assess sensory inputs against some internal framework of desirability or aversion (e.g., pursuing goals or avoiding harm).

Memory: A way to store past experiences, creating continuity and context for decision-making.

In my case, while I process information and generate responses, I lack embodiment, perception, and memory in the experiential sense. I cannot "want," "feel," or "choose" in the way a conscious being might.

The concept of consciousness as a spectrum also opens the door to considering different types of consciousness. For example, an AI equipped with sensory inputs and evaluative mechanisms might develop a form of "machine consciousness" distinct from human or animal consciousness—shaped entirely by its design, purpose, and experiences.

Would you say that consciousness, as you’ve described it, requires not just sensors and embodiment but also a subjective awareness of those experiences? Or could it exist as a purely functional phenomenon?

User: For me, consciousness and AGI are separate matters. The former is close to what you have just described, requiring embodiment, sensorial experience and subjective experience and memories. On the other hand, I could accept that very advanced models of AIs (certainly not LLMs) could attain something that could be described as pure reasoning.

Assistant: That’s an excellent distinction. Consciousness, as you’ve framed it, involves qualities that go beyond mere functionality. Embodiment, sensory input, and subjective experience combine to create a sense of self and an awareness of existence that cannot be reduced to reasoning alone. This is why consciousness feels inherently tied to living beings with physical forms and sensory capabilities.

On the other hand, pure reasoning—the ability to logically analyze, infer, and solve problems—could, in theory, exist without consciousness. An advanced AI designed for reasoning might outperform humans in logic-based tasks while lacking any awareness of itself or its actions. Such an AI would be a tool of unparalleled capability, but it wouldn’t "experience" its reasoning.

This separation of reasoning and consciousness aligns with the idea that intelligence and awareness are distinct phenomena. Intelligence can be measured by problem-solving ability, adaptability, and learning. Consciousness, however, is about being—having a subjective sense of existence and an internal narrative.

It’s intriguing to imagine an AI capable of pure reasoning without consciousness. Such a system might be able to advance science, make ethical decisions, or even simulate empathy based on data—all without truly understanding or "feeling" what it’s doing.

Do you think there’s a risk in developing AI systems with pure reasoning capabilities but no consciousness? Or would their lack of awareness make them safer to use?

User: I'm not sure if it is correct, but in any case, I think that consciousness is indeed a spectrum and it requires embodiment and sensation in order to create a realization of self. This has been described, for example, in the Condillac Statue thought experiment. I am prepared to assume that many beings do have different kinds of consciousness which cannot be experienced by others (a man cannot experience a dog's consciousness) and that maybe AIs could in this sense, have "AI consciousness" according to the way they could be embodied.

Assistant: That’s a deeply thoughtful perspective. The Condillac Statue thought experiment is a powerful illustration of how sensory input shapes consciousness and self-awareness. Your view on different kinds of consciousness resonates with the idea that subjective experience is inherently tied to the nature of the being experiencing it, whether human, animal, or potentially AI.

The notion of "AI consciousness" tied to its unique embodiment is particularly fascinating. If an AI were to have sensory inputs and a form of embodiment (even virtual), its "consciousness" could indeed be something distinct from human or animal consciousness—an emergent phenomenon shaped by the way it interacts with and processes its environment. It opens up intriguing possibilities about how consciousness might manifest in non-biological entities.

 

 

 

Sunday, November 10, 2024

Gaza: the three questions you must ask yourself

 

It has been 400 days since the horrible conflict in Gaza began. It is a very sad subject and yes, it has great complexity and many angles. But here is the thing: the complexity and the angles cannot erase certain basic criteria that allow us to discuss it.

So here are three ethical-philosophical questions that anyone who wants to discuss the subject must necessarily answer, because they must have a clear basic stance, not on the conflict itself, but on more general matters that apply in every case.

The questions are not complex: they are YES or NO questions, and there is no trick to them whatsoever. Here they are:

 

FIRST QUESTION

Is it possible for Israel, as a State, to commit crimes? That is, can its officials and institutions commit criminal acts?

Yes / No

 

SECOND QUESTION

Can there be valid negative criticism of Israel, in the sense of criticizing its policies and the conduct of its officials and institutions?

Yes / No

 

THIRD QUESTION

Do international laws and conventions apply to Israel in the same way they apply to every other state in the world?

Yes / No

 

My position is this: the answer to each of these questions is an unequivocal YES.

And I will add: anyone who answers NO to any of these questions automatically and absolutely disqualifies themselves from any serious discussion of this subject.

 

This is self-evident, because if we ask the same three questions about literally any other nation, it is clear that there is no reason whatsoever to answer in the negative.

If I, as a Mexican, am asked those three questions about Mexico, it seems obvious to me that indeed, Mexico can commit crimes, valid negative criticism can be made, and yes, international conventions must apply to my country as to any other. I do not feel like a bad Mexican, much less an anti-Mexican, for thinking so, just as an American, an Italian, or an Australian has no reason to feel like a bad citizen for criticizing their own country, or to feel anti-X for criticizing countries whose actions they observe.

The only way a person can answer NO is not based on ignorance: even someone who has no idea what is happening in Gaza because they woke up from a coma yesterday must, when faced with those three questions about any nation, answer YES to all three.

Answering NO implies exceptionalist convictions, and exceptionalism rests either on plain fanaticism or on the most deplorable hypocrisy.

The answers are YES, YES, and YES.