Friday, 25 March 2016

Writing Code Frustrations

Microsoft has spent a couple of years working on a project that has a bit of risk tied to it.

It was an artificial intelligence (AI) 'bot' which could talk back to users.  It was hooked into the Twitter world....you'd send it a comment or a picture, it would analyze it and come back with a thought, and 'speak' a short line or two to you (the original speaker).

They recently came to realize that in all this coding and planning.....they hadn't really addressed racism or offensive comments with Tay (that was their name for the AI creature).

Well.....yeah....it reached a point where Tay had to be shut down because it was absorbing whatever users fed it, and responding back with offensive comments and racial slurs.

The Microsoft guys are sitting there now....having to write thousands more lines of code....to make Tay non-confrontational and less offensive.
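For a rough sense of what that kind of filtering code looks like....here's a minimal sketch (purely hypothetical, not Microsoft's actual code....the blocked-word list and the canned fallback reply are made-up placeholders) of a blocklist check a bot could run on each reply before sending it out:

    # Purely hypothetical sketch of a blocklist-style reply filter.
    # BLOCKED_TERMS and FALLBACK_REPLY are made-up placeholders, not
    # anything taken from Tay itself.

    BLOCKED_TERMS = {"badword1", "badword2"}      # placeholder entries
    FALLBACK_REPLY = "Let's talk about something else."

    def filter_reply(reply: str) -> str:
        """Pass the reply through unchanged, or swap in the canned
        fallback if it contains any blocked term (case-insensitive)."""
        words = (w.strip(".,!?") for w in reply.lower().split())
        if any(w in BLOCKED_TERMS for w in words):
            return FALLBACK_REPLY
        return reply

    print(filter_reply("Nice weather today!"))    # passes through untouched
    print(filter_reply("badword1 everywhere"))    # replaced with the fallback

Even a toy filter like that hints at the problem....every new slur, misspelling, or coded phrase means more entries and more code, which is why the line count climbs so fast.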

So, it makes you wonder....will AI be no better than humans?  A thousand lines of code to make some computer less offensive?  Ten thousand lines?  A hundred thousand lines?  Eventually, Tay will get smart enough to ask the code-writers why they can't write themselves out of this mess, and why racism exists.  What do you think the code guys will say?

Confrontation after confrontation will occur between the code-writers and Tay.  Computers strive to repeat a process that works and to look for patterns so they can repeat it again.  I think Tay will throw up its 'computer-arms' at some point and ask what the hell is going on here.
