
why do we want to build a fully fledged a.g.i.?

Artificial General Intelligence, or a synthetic mind, might not be possible to build. But if it were, would we even want to create one?
[image: robot helicopter]

Undoubtedly the most ambitious idea in the world of artificial intelligence is creating an entity comparable to a human in cognitive abilities, the so-called AGI. We could debate how it may come about, whether it will want to be your friend or not, whether it will settle the metaphysical question of what makes humans who they are or open new doors in the discussion, but for a second let’s think like software architects and ask the question we should always tackle first before designing anything. Why would we want to build it? What will we gain? A sapient friend or partner? We don’t know that. Will we figure out what makes humans tick? Maybe, maybe not, since what works in the propositional logic of artificial neural networks doesn’t necessarily apply to an organic human brain. Will we settle the question of how an intellect emerges? Not really, since we would only be providing one example, and a fairly controversial one at that. And what exactly will the G in AGI entail? Will we need to embody it for it to work, and if not, how would we develop the intellectual capacity of an entity extant only in abstract space? Will we have anything in common with it, and could we understand what it wants?

And there’s more to it than that, even though I just asked some fairly heavy questions. Were we to build an AGI not by accident but by design, we would effectively be choosing to experiment on a sapient entity, and that’s something that may have to be cleared by an ethics committee; otherwise we’re implicitly saying that an artificial cognitive entity has no right to self-determination. That may be fine if it doesn’t really care about such a right, but what if it does? What if the drive for freedom evolves from a cognitive routine meant for self-defense and self-perpetuation? If we steer an AI model away from sapience by design, are we in effect snuffing out an opportunity or protecting ourselves? We can always suspend the model, debug it, and see what’s going on in its mind, but again, the ethical considerations will play a significant part, and very importantly, while we will get to know what such an AGI thinks and how, we may not know how it will first emerge. The whole AGI concept is a very ambiguous effort at defining intelligence, and so it doesn’t give us enough to objectively recognize an intelligent artificial entity when we make one, because we can always find an argument for and against any interpretation of the results of an experiment meant to design one. We barely even know where to start.

Now, I could see major advantages to fusing with machines and becoming cyborgs in the near future, as we’d swap irreparably damaged parts and pieces for 3D-printed titanium, tungsten carbide, and carbon nanotubes to overcome a crippling injury or treat an otherwise terminal disease. I could also see a huge upside to having direct interfaces to the machines around us to speed up our work and make life more convenient. But when it comes to such an abstract and all-consuming technological experiment as AGI, the benefits seem very nebulous at best, and the investment required seems extremely uncertain to pay off, since we can’t even define what will make our AGI a true AGI rather than another example of a large expert system. Whereas with wetware and expert systems we can measure our return on investment in lives saved or significant gains in efficiency, how do we justify creating another intelligent entity after many decades of work, especially if it turns out that we actually can’t make one, or it turns out to be completely different from what we hoped as it nears completion? But maybe I’m wrong. Maybe there’s a benefit to an AGI that I’m overlooking, and if that’s the case, enlighten me in the comments because this is a serious question. Why pursue an AGI?

# tech // agi / artificial intelligence / computer science