QUESTION: Martin,
I just read an interesting article by Stephen Hawking extrapolating on both the benefits and dangers of AI.
He specifically referenced the dangers by saying “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
In my view, this flies in the face of the cycles, and with regard to your creating and bringing Socrates online, do you think AI will ever reach such a point? The capabilities of Socrates seem to be only the tip of the iceberg for what it can unleash and teach us.
Regards,
M
ANSWER: I do not believe that a computer will become conscious. The two primary theories that drive these ideas are (1) that you will one day be able to copy the human brain to a computer like a file and somehow therein create a conscious being, and (2) that consciousness simply emerges from a certain amount of connected data.
In both cases, this somehow anticipates the creation of a soul. I do not believe we can accomplish that at all.
The fundamental aspect of AI boils down to two categories:
(1) The fake AI, whereby you create a bunch of data and an index that is merely parsed. For example: I have a headache, fever, vomiting, and chills; the computer responds that you have the flu. This is merely an expert system, not an actual thinking machine. There are versions like Apple’s Siri that are effectively just fancy look-ups.
(2) The real AI has to be able to research and create its own knowledge base. This is our structural design, and it is the only way to achieve real knowledge rather than a mere look-up.
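The "fake AI" look-up in point (1) can be sketched as a hand-built rule table matched against reported symptoms. This is only an illustrative sketch of a generic expert-system lookup; the rule contents, symptom names, and the `diagnose` function are hypothetical examples, not part of Socrates or any real diagnostic tool.

```python
# Hypothetical rule table: each condition maps to the symptoms that suggest it.
RULES = {
    "flu": {"headache", "fever", "vomiting", "chills"},
    "common cold": {"sneezing", "runny nose", "sore throat"},
}

def diagnose(symptoms):
    """Return the condition whose rule set overlaps most with the reported symptoms."""
    reported = set(symptoms)
    best, best_score = None, 0
    for condition, required in RULES.items():
        score = len(required & reported)  # count matching symptoms
        if score > best_score:
            best, best_score = condition, score
    return best

print(diagnose(["headache", "fever", "vomiting", "chills"]))  # -> flu
```

The point of the sketch is that nothing here "thinks": every answer the system can ever give was typed in by a human beforehand, and the program merely indexes into that fixed table.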
I do not see it being possible to move to the dangerous mode of AI, for that requires free will and consciousness, which I do not see how to code, assuming it is even possible.
In the physical sciences, we build on each person's work and retain that knowledge. In the social sciences, we retain nothing and start anew each and every time. In the most important aspects of the social economy, the ones that instigate war, we continually fly by the seat of our pants, sticking our finger into the candle flame and expecting it not to burn.
What Stephen Hawking is describing is a what-if scenario. We are by no means close to such technology, nor am I sure it can even be accomplished, let alone what the benefit would be of creating a computer that could have a bad hair day.