Back in 2015, I addressed the concerns of Stephen Hawking and Elon Musk, who were worried about what might happen as a result of advancements in Artificial Intelligence. They were concerned that robots could grow so intelligent that they could independently decide to exterminate humans. Today, the worry has only grown with GPT-4 open for everyone to try. Everyone who uses it is training the computer and expanding its knowledge base. Musk, along with a gaggle of others, has penned a letter calling for a “pause” in AI development.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
I have tinkered with AI since the early 1970s. There is no doubt these guys are influenced by concepts like those in the movies The Terminator and The Matrix. But from a real-world programming side, outdoing human thinking is easy. A computer model can far surpass humans in so many ways. What we have done in finance is unparalleled, but the key in our system was to ELIMINATE human emotion. Only in that respect has Socrates been able to beat human judgment, which is always flawed.
We could create an AI that is better than any medical doctor, for even the best doctor is offering only an “opinion,” which is not always correct. A computer that had the full database of diseases could sort things out in the blink of an eye. Indeed, I contracted a parasite that went into my left eye. I could feel it. The doctor would not listen; he sent me to a specialist for something else, and I told that specialist what the issue was. Only because the same thing had happened to him, he called my doctor and said this guy has a parasite. He then sent me to an infectious disease specialist who looked at my blood work and, in just one minute, said yes, you have a parasite. To this day, I have lost some vision in my left eye because nobody would listen. If they have never experienced it, they would not even think about it. A computer would not make that human mistake.
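To illustrate the point, here is a minimal sketch in Python of what a symptom lookup against a disease database amounts to. The disease table and symptom names are invented for the example; this is not a real diagnostic system. The point is that the machine scores every disease it knows against the findings, including the ones no individual doctor has ever personally encountered.

```python
# Illustrative only: a toy symptom-matching lookup, not a real diagnostic tool.
# The "database" below is a hypothetical, hard-coded table for demonstration.

DISEASE_DATABASE = {
    "ocular parasite": {"eye floaters", "vision loss", "abnormal blood work"},
    "migraine": {"headache", "light sensitivity", "nausea"},
    "conjunctivitis": {"eye redness", "itching", "discharge"},
}

def rank_candidates(findings: set[str]) -> list[tuple[str, float]]:
    """Score every disease in the database against the patient's findings.

    Unlike a human, the lookup never skips a disease just because it is rare
    or outside anyone's personal experience.
    """
    scores = []
    for disease, signs in DISEASE_DATABASE.items():
        overlap = len(findings & signs)
        if overlap:
            scores.append((disease, overlap / len(signs)))
    return sorted(scores, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    patient = {"eye floaters", "vision loss", "abnormal blood work"}
    for disease, score in rank_candidates(patient):
        print(f"{disease}: match {score:.0%}")
```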
What these guys are talking about is what I would call an open-ended AI system, meaning it has no actual purpose. That is a black box, and allowing a computer to develop into areas nobody has even thought about could pose a danger more along the lines of The Matrix or The Terminator. They wrote in their letter:
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
I am pretty good at programming. This is all conceptual design. In the case of Socrates, it is confined to the financial markets. It is not going to surf the web in search of the name of Lady Gaga’s dog. Socrates will not discover the cure for cancer; it does not have a medical database. The type of AI that they are talking about is limitless machine learning that can write its own code and go in directions that nobody thought about. Let’s start with a description of the actual real-world use case. Why would you even need such a program to go in directions that a human could not even imagine?
The government does not want independent thought – they do not even want intelligent police, for the same reason Stalin killed intellectuals. The government wants a mindless and emotionless drone. They want robot police and a robot army that follow orders and will never hesitate. As I have stated, when the police and military no longer follow orders and side with the people, revolutions take place. Those in power know that. Hence, they want robots who will control the mob, kill us when ordered, and for that, they do not need full unlimited AI that could also turn on the government.
The AI that is now unfolding with no direction, just letting it go and seeing what develops, may be interesting for a lab experiment. But we must respect that there MUST be limitations. Socrates has beaten everyone, including me. But it is confined to this field. It has a purpose, and its design would never have allowed it to go off and explore other fields. There was no rationale to create such an open-ended machine learning system. It’s confined to the world economy, capital flows, weather, and geopolitical developments.
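As a rough illustration of what I mean by a purpose-confined design, here is a sketch with invented domain names of a system whose inputs are limited to a fixed set of fields. This is not Socrates’ actual code; it only shows the design principle that there is no code path for the machine to wander into areas outside its purpose.

```python
# Illustrative sketch of a purpose-confined design: the system accepts inputs
# only from a fixed set of domains and rejects everything else.
# The domain names are assumptions for the example, not any real system's API.

from dataclasses import dataclass

ALLOWED_DOMAINS = frozenset({"capital_flows", "economy", "weather", "geopolitics"})

@dataclass(frozen=True)
class Observation:
    domain: str
    payload: dict

def ingest(obs: Observation) -> dict:
    """Accept only observations from the whitelisted domains.

    The constraint is structural: anything outside the defined purpose
    is rejected before it ever reaches the model.
    """
    if obs.domain not in ALLOWED_DOMAINS:
        raise ValueError(f"Rejected: '{obs.domain}' is outside the system's purpose")
    return {"domain": obs.domain, "accepted": True, "payload": obs.payload}

if __name__ == "__main__":
    print(ingest(Observation("capital_flows", {"region": "EU", "net_inflow": 1.2})))
    try:
        ingest(Observation("medical_research", {"topic": "cancer"}))
    except ValueError as err:
        print(err)
```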