You’ve probably heard it a million times, but there is some wisdom in being careful what you wish for. While we may be striving to attain superintelligence, how can we ensure that the technology doesn’t misunderstand its purpose and cause unspeakable devastation?
The key to this problem lies in programming the motivation for an SI to accomplish its various human-given goals. Say we designed an SI to make paper clips; it seems benign, but what's to prevent the machine from taking its task to an extreme and sucking up all the world's resources to manufacture a mountain of office supplies?
This is tricky, because while AI is only motivated to achieve the goal for which it has been programmed, an SI would likely go beyond its programmed objectives in ways that our inferior minds couldn’t predict.
But there are solutions to this problem. For instance, superintelligence, whether it be AI or WBE, can be programmed to learn the values of a human on its own. For example, an SI could be taught to determine whether an action is in line with a core human value. In this way we could program SI to do things like “minimize unnecessary suffering” or “maximize returns.”
Then, before acting, the machine would calculate whether a proposed action is in line with that goal. With experience, the AI would develop a sense of which actions are in compliance and which aren’t.
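The pre-action check described above can be sketched in a few lines. This is purely illustrative: the scoring function, action names, and threshold are all invented stand-ins for what would, in reality, be a learned value model.

```python
# Hypothetical sketch: an agent that screens proposed actions against
# a value like "minimize unnecessary suffering" before acting.
# All names and scores here are illustrative, not a real implementation.

def suffering_score(action):
    """Toy stand-in for a learned model estimating how much
    unnecessary suffering an action would cause (0 = none, 1 = most)."""
    scores = {"administer_medicine": 0.1, "withhold_treatment": 0.9}
    return scores.get(action, 0.5)  # unknown actions get a neutral score

def choose_action(candidates, threshold=0.3):
    """Keep only actions whose estimated harm stays below the threshold,
    then pick the least harmful of those; refuse to act otherwise."""
    permitted = [a for a in candidates if suffering_score(a) < threshold]
    if not permitted:
        return None  # no candidate clears the value check
    return min(permitted, key=suffering_score)

print(choose_action(["administer_medicine", "withhold_treatment"]))
```

Running this prints `administer_medicine`: the harmful option is filtered out before the machine ever acts, which is the essence of checking compliance with a programmed value.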
But there’s another option. We could also program an AI to infer our intentions based on the majority values of human beings. Here’s how:
The AI would watch human behavior and determine normative standards for human desires. The machine would essentially be programmed to program itself. For instance, while each culture has its own culinary tradition, all people agree that poisonous foods should not be eaten. By constantly learning through observation, the SI could self-correct by changing its standards to correspond to changes in the world over time.
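As a rough sketch of this observe-and-infer loop: the "norm" for a situation can be taken as the majority choice among observed humans, recomputed as new observations arrive so the standard drifts with the population. The class, situations, and choices below are all hypothetical illustrations, not anyone's actual design.

```python
from collections import Counter

class NormLearner:
    """Toy sketch of inferring normative standards from observation.
    The norm for each situation is simply the majority choice seen so
    far, so it self-corrects as behavior in the world changes."""

    def __init__(self):
        self.observations = {}  # situation -> Counter of observed choices

    def observe(self, situation, choice):
        """Record one observed human choice in a given situation."""
        self.observations.setdefault(situation, Counter())[choice] += 1

    def norm(self, situation):
        """Return the majority behavior, or None if nothing observed yet."""
        counts = self.observations.get(situation)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

learner = NormLearner()
for _ in range(99):
    learner.observe("offered_unknown_berries", "refuse")
learner.observe("offered_unknown_berries", "eat")  # one outlier
print(learner.norm("offered_unknown_berries"))
```

This prints `refuse`: a single deviant observation doesn't overturn the inferred norm, but a sustained shift in what the machine observes eventually would, which is the self-correction the paragraph describes.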
But enough about decimation and total destruction. Before panicking about the impending machine-led apocalypse, let’s take a look at how general intelligence technology can be developed and put to productive use.
It’s likely that the increasing availability and decreasing cost of technology will lead to the cheap mass production of machines capable of doing jobs that currently require the hands and mind of a human. This means that machines will not only replace the entire human workforce but will also be easily replaceable.
But clearly this amounts to mechanical slavery and raises important moral issues. For example, if a machine became aware that it would die at the end of the day, we could simply program it to embrace death. But is that ethical? Should these artificial employees be treated like sentient beings or inert tools?
Work isn’t the only thing that SI machines could take over; they could also be in charge of various mundane tasks in our personal lives. As the minds of these machines come increasingly closer to resembling those of human beings, we could use them to optimize our lives; for instance, we could design a digital program that verbally articulates our thoughts or that achieves our personal goals better than we could alone.
The result of such advances would mean a human existence that is largely automated, low-risk, devoid of adventure and, frankly, too perfect. And where would that leave us? How would we occupy ourselves in such a future?
It’s clear that an entirely robotic workforce would completely transform the economy, as well as our lifestyles and desires; as machine labor becomes the new, cheaper norm, wages will drop so low that no human will be able to live off a paycheck. Meanwhile, the few owners of the mechanical workforce would accumulate enormous wealth.
But this brings us back to an earlier point, because where that money ends up also depends on whether SI is designed by a single exclusive group or is the result of a slow collaborative process. If the former turns out to be true, most people would be left with few options for income generation, likely renting housing to other humans or relying on their life savings and pensions.
And the people who don’t have property or savings? They would be destitute. Their only options would be to use their remaining money to upload themselves into a digital life form, if such technology exists, or rely on charity from the hyper-wealthy.
And the rich? They’ll lose interest in what we today consider highly desirable luxuries. That’s because with machines doing all the work, anything made or offered by a human will become a highly valued rarity, much like artisanal products are in our time. While today it might be wine or cheese, in the future it could be something as simple as a handmade key chain.
But the new mode of production would also make possible an unimaginable variety of technological products – maybe even the ability to live forever or regain youth. So instead of buying yachts and private islands, the wealthy might use their money to upload themselves into digital brains or virtually indestructible humanoid bodies.
However, this scenario assumes that the superintelligent worker robots will not rebel and try to destroy human society. Therefore, whatever route we follow with SI, safety will always be key.
It’s clear that the development of SI comes with a variety of safety issues and, in the worst case scenario, could lead to the destruction of humankind. While we can take some precautions by considering the motivation for the SI we build, that alone won’t suffice.
So what will? Considering every potential scenario before bringing a hyper-powerful force like SI into the world. For instance, imagine that some sparrows adopted a baby owl. Having a loyal owl around might be highly advantageous; the more powerful bird could guard the young, search for food and do any number of other tasks. But these great benefits also come with a great risk: the owl might realize it’s an owl and eat all the sparrows.
Therefore, the logical approach would be for the sparrows to design an excellent plan for how to teach the owl to love sparrows, while also considering all possible outcomes wherein the owl could become a negative force.
So how can we teach our robotic, superintelligent baby owl to love humans? As we already know, we can make safety a priority through a long-term international collaboration. But why would the competitive rush to design the first SI be a safety threat?
Because scientists would forgo safety measures to speed up their progress and wouldn’t share their work with others. That means that if an SI project went horribly wrong and threatened humanity with extinction, too few people would understand the machine’s design well enough to stop it.
On the other hand, if governments, institutions and research groups join together, they could slowly build a safe and highly beneficial SI. That’s because groups could share their ideas for safety measures and provide thorough oversight for each phase of the design. Not just that, but an international superintelligence project would promote peace through its universal benefits. Just consider the International Space Station, an endeavor that helped stabilize relations between the US and Russia.
Inventing a superintelligent machine capable of things far beyond the ability of a human is both a tantalizing prospect and a precarious path. In order to ensure such technology develops in a safe, responsible manner, we need to prioritize safety over unchecked technological advancement. The fate of our species depends on it.
Check out my related post: Can AI redistribute wealth for us?