The Nines


A Ban on Offensive Autonomous Weapons

Four friends are chatting in their group chat after class…

Diane: Guys… our robot still doesn’t work :( Maybe we should just add a bunch of weapons onto it and be on the attack during the competition.

Gauri: Dude! We can’t build autonomous weapons; haven’t you heard that a bunch of prominent researchers and tech figures, including Elon Musk, are calling for a worldwide ban on offensive autonomous weapons? 😓😓😓😓

Mark: Well, if we don’t add weapons, some other group probably will. So we should probably have them to be safe. How would a ban like that even work?

Gauri: But that’s why a worldwide ban on these weapons is necessary! Just like we have tried to tackle nuclear weapons and climate change… though I guess we haven’t been so successful at the latter. Not getting into that right now.

Stark: First off, to discuss this problem we have to understand the statement itself. How do we define an offensive autonomous weapon? The first thing that comes to my mind is an intelligent guided missile, whose sole function is to kill and destroy.

Diane: Well, it doesn’t have to be a missile; I think AI is on the verge of being an autonomous weapon itself. It is certainly autonomous, since it can think and act on its own, and it can definitely be used as a weapon. Say we embedded a super-advanced AI into people’s phones and it collected biometric data to come up with a way to target and kill individuals based on that data. Also, like we discussed in class, suppose the AI wasn’t offensive to begin with: if its trainer or the data set it trains on is offensive, it will get more and more offensive over time. That wouldn’t be the AI’s fault; it would be the public’s fault for leaving that information floating around.

Stark: AI is a tricky topic since it can definitely be massively weaponized.

Gauri: Okay, now that we have our definitions straight and a working example, we can continue. So, taking AI as the example, do you think we would actually be able to ban AI research and development?

Mark: Well, I don’t think it’s wise to ban AI research; the research itself provides far more benefits than harms, and AI is at the forefront of research right now, so it would be super hard to just stop everything already underway. The weapons, though, are a specific case that needs to be addressed.

Diane: I also don’t think any country would implement this policy, and even if it did exist, I don’t think anyone would abide by it, just by the game-theory reasoning we learned in class. Remember the prisoner’s dilemma? 😅👥

Gauri: Omg remember how I won the game theory question! 😆😆 But yeah, maybe banning it would not be practical then?
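(To make the prisoner’s-dilemma reasoning above concrete: a minimal Python sketch of an arms-race payoff table. The payoff numbers are made-up illustrations, not anything from the chat; the point is only that “build weapons” is each country’s best response no matter what the other side does, so both end up worse off than under mutual restraint.)

```python
# Toy prisoner's-dilemma model of the arms-race argument.
# Payoff numbers are illustrative assumptions chosen for the example.

# Payoffs (our_payoff, their_payoff) indexed by (our_choice, their_choice):
# each country either honors the ban ("ban") or builds weapons ("build").
PAYOFFS = {
    ("ban", "ban"): (3, 3),      # mutual restraint: best shared outcome
    ("ban", "build"): (0, 5),    # we comply, they arm: worst for us
    ("build", "ban"): (5, 0),    # we arm, they comply: best for us
    ("build", "build"): (1, 1),  # arms race: bad for everyone
}

def best_response(their_choice: str) -> str:
    """Return our payoff-maximizing move given the other country's move."""
    return max(("ban", "build"),
               key=lambda ours: PAYOFFS[(ours, their_choice)][0])

# Whatever the other side does, building pays more, so both sides build
# and land on the (1, 1) outcome, even though (3, 3) was available.
print(best_response("ban"))    # -> build
print(best_response("build"))  # -> build
```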

Diane: I think so, the reason being that the research is just way too valuable: it explores potential beyond current human knowledge. We are essentially trying to create something better than humans. And once a program achieves full moral agency, meaning it can act, and choose how to act, without any top-down instructions from its designer, it would definitely be beyond meaningful human control.

Stark: Every country has its brightest minds working on this research, so it would be unreasonable for them to just stop, since they have probably already poured a lot of money into these projects. The stakes are way too high to give up midway for what they’d see as a sentimental reason, namely that the program has the potential to become an offensive autonomous weapon. 💣💣

Diane: Well, many of the brightest minds are actually strongly opposed to military research. This ban proposal shows exactly that: it has people like Elon Musk and Stephen Hawking behind it, but also many professors in academia. Many Cornell faculty refuse military funding as well.

Gauri: I feel like one thing we are overlooking is how this normalizes war. There are fewer consequences for sending weapons into other countries. I understand where the research currently stands, and maybe a full-on ban is infeasible. But I think much more awareness should be raised, so that going into AI-plus-weapons research stops looking desirable. Not to mention that the countries that can’t afford to build these weapons are the most vulnerable. Private corporations are tough to fight, but with worldwide support from powerful organizations at the level of the UN, this could be much better contained. 😓😓😓

Mark: Okay, how would you propose to solve this issue, then?

Stark: We could figure out an efficient way to evaluate the research at its current stage, from its past results and its future potential. I would suggest establishing a world council consisting of top computer scientists, engineers, and philosophers to go over these issues, and every country should be represented! The point of this council would be to make sure all countries understand where other nations stand in this research, and to decide what the worldwide policy should be. There could be sanctions if countries don’t comply, similar to how other worldwide agreements are enforced.

Diane: Civilians also deserve to understand what research is going on and where the limits are. After all, civilians are the ones who will be affected.

Mark: Yeah, sounds good! I think this way it would benefit both researchers and the public in understanding the current state-of-the-art research and its potential to benefit the world. 😊 We don’t yet understand right and wrong when it comes to cutting-edge technology, and we can’t hope to until everyone has a voice.