Challenges Companies Are Facing While Developing AGI
Looking across the AGI research field today, there are two common categories of problems scientists focus on: prediction and control. Prediction models learn about a domain (such as weather patterns) and how it might evolve, while control models select actions for agents to take in that environment. Building a successful path to AGI requires understanding and developing algorithms in both spaces, accounting for all the variation that our natural and social environments throw at us: how viruses mutate, how language evolves in use and meaning over time, and how to produce energy from fusion power.
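To make the prediction/control distinction concrete, here is a minimal sketch in Python. It fits a linear model of a toy environment's dynamics (prediction) and then uses that learned model to select actions that steer the state toward a target (control). The environment, the target, and all names are illustrative assumptions, not part of any actual AGI system.

```python
import numpy as np

# Toy one-dimensional environment whose state decays and reacts to actions.
# Everything here (dynamics, target, names) is an illustrative assumption.
rng = np.random.default_rng(0)

def step(state, action):
    """True (unknown-to-the-learner) dynamics: decay plus action plus noise."""
    return 0.9 * state + action + rng.normal(scale=0.1)

# --- Prediction: learn how the domain evolves ---------------------------
# Collect transitions under random actions and fit a linear next-state model.
states, actions, nexts = [], [], []
s = 0.0
for _ in range(500):
    a = rng.uniform(-1.0, 1.0)
    s_next = step(s, a)
    states.append(s)
    actions.append(a)
    nexts.append(s_next)
    s = s_next

X = np.column_stack([states, actions])
y = np.array(nexts)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # learned model: s' ~ w[0]*s + w[1]*a

# --- Control: use the learned model to choose actions -------------------
def policy(state, target=1.0):
    """Pick the action whose *predicted* next state is closest to the target."""
    candidates = np.linspace(-1.0, 1.0, 21)
    predicted = w[0] * state + w[1] * candidates
    return candidates[np.argmin(np.abs(predicted - target))]

s = 0.0
for t in range(10):
    a = policy(s)
    s = step(s, a)
    print(f"t={t}  action={a:+.2f}  state={s:.2f}")
```

The design point the sketch illustrates: the prediction component never chooses anything, and the control component never observes the true dynamics directly; it only consults the learned model.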
From a science-fiction perspective, it seems quite reasonable to anticipate that AGI-based robots that look and behave like us will ultimately outmaneuver human beings thanks to their superior intellectual and physical capabilities. On the ethics side, there are two main aspects to consider. First, we need to think carefully about the “programmed” behavioral logic we may introduce into robots. A classic example is the self-driving car that, based on its programmed logic, must decide in an unavoidable collision whom to protect and whom potentially to injure: its passenger or the pedestrian crossing the road.
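As a purely illustrative sketch of what such “programmed” behavioral logic might look like, consider the following toy decision rule. The rule (protect whichever party is less likely to survive the impact) is a hypothetical assumption chosen only to make the dilemma concrete; it is not a recommendation or any manufacturer's actual policy.

```python
# Hypothetical collision logic; the rule below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Party:
    kind: str               # "passenger" or "pedestrian"
    survival_chance: float  # estimated probability of surviving the impact

def choose_protected(passenger: Party, pedestrian: Party) -> Party:
    """Return the party the car protects in an unavoidable collision.

    This toy rule protects whoever is less likely to survive the impact.
    Which factors *should* enter such a rule is exactly the ethical
    question the text raises; the code only shows that some rule must
    be committed to in advance.
    """
    return min((passenger, pedestrian), key=lambda p: p.survival_chance)

print(choose_protected(Party("passenger", 0.9), Party("pedestrian", 0.4)).kind)
```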
Another example is an AGI sales program implemented to sell useless insurance products to uninformed consumers in order to improve profits. The second, and far more relevant, aspect of the problem is the use (or abuse) of AGI devices to hide unethical human behavior. Examples include drones bombing the allegedly wrong targets, or the myriad ways in which companies, once their misdeeds are uncovered, may deflect blame onto the robots: apologizing for the “incorrect” actions of their machines, or promising to improve the ethical rules built into their AGI devices. One can also think of operating systems that degrade the performance of the older hardware they run on in order to push users to discard old devices and buy new ones, thereby increasing sales. Since AGI robots have the potential to exacerbate the worst tendencies in human behavior, the challenge lies in where to draw the line between direct human accountability and accidental, unintended AGI behavior.
Researchers are still quite far from AGI robots outmaneuvering humans and becoming autonomous beings in their own right; it is arguable whether we will ever reach such a scenario, even if it is possible in principle. To reach the so-called point of singularity, a long road of cumulative progress is required: from logically optimized programmed behavior, to animal-like sustained and efficient learning (most likely based on artificial neural networks), to artificial self-awareness, and finally to a form of self-sustainability, a system able to ensure its own continuation and evolution. The ethical dimension, on the other hand, is becoming increasingly relevant, not so much because of the accidental emergence of “unethical” behavior in AGI-enabled robots, but because of the unethical use of AGI by humans in business and social affairs: the selling of useless insurance products, for example, or the misuse of drones in warfare. These areas, where crucial issues exist in the short term, require deliberate consideration of the ethical ramifications.
Areas where concerns can already be identified relate to the use of drones, mainly (but not only) in warfare, and to the use of AGI in security, privacy, and sales. The main concern is the current lack of legislation and regulation, not only about what may be done with AGI devices (airports, for instance, have only recently seen legislation restricting private drones that dangerously interfere with landing planes), but also about what kinds of AGI may be developed in the first place.