Friday 24 November 2023

Sky-Net World?

I sat and read this news item today: "The Pentagon is currently moving toward allowing AI weapons to autonomously make decisions to kill humans."

So the gist of this?  Hank and Jimmy....contractors for some company, are simply writing code that says: under these conditions....if some Army or Marine unit comes under fire....'master-control', without any human input, will use every asset under its control to save the unit....no human mind or authority connected.  It'll just be automatic.

Problems?  I just can't see any President allowing such a mechanism to exist and make decisions without White House 'voting'.

Good thing or bad thing?  I tend to think it's a positive thing....taking some stupid Army or Air Force general out of the position of making a crappy decision.  If such-and-such unit is taking heavy fire, and you have ten drones in the air that could bring massive firepower to the situation within twenty minutes....let the AI do its job.

If AI had been active on 9-11?  It would have sorted the scenarios in sixty seconds, then used drones to intercept every passenger plane and damage each craft enough that it crashed well before coming within range of the towers, the Pentagon, or that field in Pennsylvania.

Yeah, I'm probably one of the few who completely trust AI to do the right thing and draw logical conclusions from the situation.