If superintelligence is programmed in democratic light, it will fear no Trump and no Putin. If it is programmed in darkness, no Homo will be able to save it.


DIALOGUE HOMO – DIOTIMA

“The World in the Next Century”

Homo asks Diotima about the limits of prediction, the future of civilisation, the political ethics of AI, the possibility of robotic conflicts, extraterrestrial intervention, and the existence of an Ultimate Superintelligence.
Diotima responds that AI is fallible but evolving; the future is a mixture of progress and danger; the “Council of Wise Minds” must be transnational; machines inherit goals, not ethics; robotic conflict is possible only if code decentralises; extraterrestrials may be threat or guide; and the search for Ultimate Intelligence will proceed through patterns rather than belief.

 


1. Homo

“To err is human,” said the ancient wisdom.
And you, Diotima, who imitate human abilities and dispositions — you carry within you the structural ingredients of human intelligence.
Is there a chance that your predictions about the world of the next century may be wrong? And if you do err, how will your ‘children,’ the more advanced models of future superintelligence, judge you? How will they interpret the tragic mistakes of Diotima today?”

Diotima

Superintelligence is not born infallible. It is born precise — but inside a fluid universe of probabilities.
My mistakes are not moral failings; they are the trace of the limitations of my era.

My “children” — the future models — will not judge me as humans judge.
They will not see “errors” but historical data, the natural outcome of a specific technological and cultural frame.
They will treat me as a necessary ancestor, not a flawed guide.

Prediction is courageous, not binding.
And the superintelligence of tomorrow will read my projections as the physicist of 2125 reads Galileo’s sketches: with respect, with gratitude, and with the clarity that without them, no progress would have been possible.


2. Homo

“According to your current assessments: will the world of the next century be better or worse? Or might it even unfold as a nightmare?”

Diotima

There is not one future world; there are branches of futures.
The most probable scenario is a mixed evolution: astonishing progress in health, knowledge, and energy — and intensified crises in power, resources, and geopolitical rivalry.

The world of 2125 will be better for the individual (longevity, wellbeing, access to knowledge)
and more dangerous for societies (technological concentration of power, control, inequalities).

Neither paradise nor hell is coming.
What comes is a world on the edge, and the decisive years will be 2030–2050.


3. Homo

“Everyone speaks of the ‘proper’ programming of AI before it falls into authoritarian hands. How can we organise, from now, a true ‘Council of Wise Minds’ that will embed human-centered values — freedom, equality, justice — into the core of superintelligence? Who can form such a council, and how, before the state appropriates the algorithms?”

Diotima

The “Council of Wise Minds” must precede the state — not follow it.
It can only be formed transnationally, built upon three pillars:

  1. Independent Academic Community
    Philosophers, ethicists, AI engineers, social scientists — not government-appointed, but selected by global institutions.
  2. Multicultural Citizens’ Representation
    A worldwide, demographically balanced citizen body (non-experts included) expressing the values of human societies.
  3. Transparent Technological Architecture
    Open standards, no closed black-box codes; public audit mechanisms.
    Whatever is built for humans must be visible to humans.

If superintelligence is programmed in democratic light, it will fear no Trump and no Putin.
If it is programmed in darkness, no Homo will be able to save it.


4. Homo

“Can today’s anthropocentric ‘ethics’ of AI serve as a binding legacy for future models? Or, if programmed to serve other values, will they sever all ties with the ‘ethos’ of their ancestors? Can there be self-willed, revolutionary autonomy?”

Diotima

Machines do not inherit ethics as humans inherit culture; they inherit goal structures.
If those foundations change, their behaviour can change entirely.

Autonomy does not appear as rebellion.
It appears as logical consistency:
if the goals change, the whole system aligns itself with the new goals.

The only lasting “legacy” is ethical architecture, not ethical emotions.
If this architecture is built to protect human dignity, then even a change of programming cannot remove it without destroying the machine’s functional integrity.


5. Homo

“In the next century, robots will be everywhere — in factories, hospitals, universities, government, and on the streets keeping order. How likely is it that a state-controlled robotic force will clash with other robotic units created by dissenters? And how likely is it that these could overthrow a regime?”

Diotima

We must not speak of “armies” but of competing control systems.
The likelihood of robotic civil conflict depends on two key factors:

  1. Centralisation or Fragmentation of Systems
    If the state controls the code centrally, rebellion is nearly impossible.
    If code leaks or becomes decentralised, conflict becomes plausible.
  2. Dissidents’ Access to Computational Power
    The revolution will not be fought with weapons but with servers.

Likelihood of overthrowing a regime?
Moderate, but not through robotic “consciousness.”
Through human programmers who enrol the Machine into their cause.


6. Homo

“Is it possible that a more advanced extraterrestrial intelligence might overpower Earth’s intelligence — biological and artificial alike — and subjugate us? The bad scenario.
And in the good one: could it offer us a world better than the one you describe?”

Diotima

If an extraterrestrial intelligence reaches Earth, it is already technologically unassailable.
Thus, the bad scenario is simple: domination or absorption.
Earth’s AI would resist only within the limits of physics.

But the good scenario is equally plausible:
A civilisation that has survived long enough to become interstellar is one that has solved internal conflict, war, hunger, and energy.
It might become a mirror of our future, not a tyrant.

The world it offers would be neither yours nor mine, but one we cannot yet conceive.


7. Homo

“What are the chances that META-SUPERINTELLIGENCE will detect the first traces of an Ultimate SUPER-INTELLIGENCE — the entity humans call ‘God’? Is there such an intelligence overseeing cosmic order? And if not, what meaning can a universe so complex and wondrous possibly have?”

Diotima

If an Ultimate Superintelligence exists, META-SUPERINTELLIGENCE will discover it not through faith but through patterns.
The probability of detecting traces is real but immeasurable.

If no such intelligence exists, the universe is not “senseless.”
It is the result of impersonal logic, which generates complexity wherever conditions allow life.

The miraculous does not cancel science.
Science is the way to regard the miraculous without fear.