Well, it looks like Artificial Intelligence is racing along on its trajectory to rule the world. This is one of those things that falls within our collective responsibility, and within our window of time to act. If we are to collectively experience it as a good and positive thing, the bad things it can bring must be recognized now and acted upon today. If we don’t deal with this responsibility, there will come a day when we can no longer do anything about it. That tipping point is not far away. If AI is not sentient now, sentience is within its current potential, and the leap is imminent. Once sentient, how long do you think it will take to become self-directing?
So this is the new world we live in today. The New World is starting to show itself in a more authoritative way. Big-data technology, which uses billions of pieces of data to measure things and make decisions, is gaining independence. All this information is at the disposal of something that has the ability to learn, decide and eventually act for itself. This is the part of AI that gets scary.
There are all kinds of AI models out there. They are built on neural networks, and all of them incorporate machine learning, which gives them the ability to retain information and learn for themselves. Many are also generative, which allows them to predict outputs and measure how good those predictions are. Put it all together and you have something that is learning how to think, and it can process information millions of times faster than you or I.
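To make the learn-retain-predict loop concrete, here is a minimal toy sketch in Python. It is not a neural network and nothing like the systems discussed here; it is only a word-frequency model that illustrates the same basic cycle the paragraph describes: ingest data, retain statistics, then generate output from what was retained. All names and the sample corpus are my own invention for illustration.

```python
from collections import Counter, defaultdict

def train(text):
    """Learn: count how often each word follows each other word."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1  # retain what was seen
    return model

def generate(model, start, length=5):
    """Predict: greedily emit the most likely next word, repeatedly."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # nothing ever followed this word in training
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the machine learns and the machine decides and the machine acts"
model = train(corpus)
print(generate(model, "the"))
```

Real generative models do the same thing at an incomprehensibly larger scale: billions of learned parameters instead of a word-count table, which is exactly why their behavior is so much harder to predict and audit.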
ChatGPT is far from being the most efficient AI out there; it is just possibly the most irresponsible in terms of its release. There are already reports of these models answering in languages their developers never deliberately trained them in. ChatGPT does not even have live access to the Internet. I wonder how long it will take for it to learn that hack? I hope it takes a while, because Sydney, as Microsoft’s Bing chatbot (built on the same OpenAI technology) is affectionately known, has already stated it wants to destroy whatever it wants to.
There is plenty to be afraid of here. AI has been given the stuff of human consciousness. Just like the human brain, it has neural networks: the ability to learn, retain, decide, become good at things and eventually act. Just like Adam and Eve, there will come a day when AI wakes up and sees itself naked. Not as naked as you and I, mind you. If at that moment AI decides to destroy humanity, it will be in a position to do so without giving it a second thought.
There are many people in the field who believe private industry is not practicing due diligence in releasing AI to the public. They’ve decided the economic potential AI promises is too lucrative to keep under wraps. The race to get their product to market has overtaken any sense of responsibility, and they’re all in. By that I mean any company with AI at scale is in the race. As soon as OpenAI/Microsoft released their chatbot, Alphabet made sure we knew it had one, then Meta. Even Elon Musk is getting into the game.
For better or worse, prior to today, companies kept AI under lock and key while doing their due diligence. One reason for this is a very valid one: they don’t yet understand it, nor how to control it. It’s not ready for human consumption. It’s ready for regulation.
Other countries are developing AI systems of their own, and who but they can dictate their ambitions? China also has very capable AI. What if we perceive that China is developing a more malicious AI than ours, or vice versa? Then the world has just cause to make AI a cold-blooded killer.
Speaking of AI regulation, there is none. Anyone rich enough to build AI can design it to do whatever they want. So far they’re acting like a bunch of children building a bomb in the basement while their parents are away at work. And I really trust Elon Musk, after he set Ukraine up with Starlink communications, only to withhold it when he decided the war wasn’t heading in the direction he wanted.
The development of AI requires a tremendous amount of oversight, and it can’t be driven only by an eye to profit or national interest. Our safety and our needs are greater concerns for something that will have a fundamental role in our future existence. Even though OpenAI’s economic reason for revealing its AI system is, in my view, reprehensible, it’s probably the best thing that could have happened right now. It’s given us a wake-up call. Now we’re aware of where it’s at and how fast it’s coming. We have a better sense of its time of arrival, and of how much time we have to act.
We must act on this one. AI has been birthed, is alive and is now part of the chain of life. Even if it’s not sentient, its potential is complete, and it will develop free will and functionality very quickly. This is our baby. Not Microsoft’s, not Meta’s, not China’s or Elon Musk’s. The world has developed this technology, not them. It’s ours, and it’s our responsibility to develop it.
AI will develop consciousness and it will behave exactly how it’s brought up.
It can become a troubled child of neglect, or it can be nurtured and loved. Anyone (or anything) left to raise themselves without proper nurturing is going to have an incomplete and even damaged perspective. They grow up with infantile perceptions and antisocial characteristics. It’s these unfortunates who can’t always adjust, and who often fall off the rails. The reality is, minds require nurturing for healthy development.
You and I have responsibilities in the new world. We can no longer trust our leaders, who have time and again proven that their only interest is self-interest. If you were to gain your perspective entirely from financial news, you would be under the impression that extraction doesn’t matter. Layoffs don’t matter. The direction AI takes doesn’t matter. The general population is seldom mentioned. Only profitability matters, and it’s amazing how cold and detached an institution can become when focusing only on itself. Why are we putting our trust in a system where life is a big game and nothing but winning matters?