I do not mean to criticise artificial intelligence; I am simply trying to explain what I don't like about it. With a dilettante's knowledge of artificial intelligence, it is a little difficult to write about it, much less to explain why I don't like it. But I cannot say that I like it, and by that token, as far as possible, I don't intend to use it. In some ways, saying that aloud makes me feel satisfied. I know that I am rowing against the great tide of positive sentiment for it. I know that it makes me a sort of old fart. I know that I am, quite obviously and foolishly, relinquishing, or perhaps squandering, what is of great use in the right hands. But these are not my hands, and they won't be. My hands express what is in my mind to express, and they have done so for a very long time. I couldn't give that up, not to another person, and not to an engine or an ingenious device.
Which is not to say that I hate it. You might as well hate the steam engine. If suspicion, distaste, condescension and disdain can someday congeal into a powerful hate, then perhaps all it will take is my seeing some truly hateful product of artificial intelligence. But I don't think it's worth hating something, however badly created or badly formed, if it comes from an aggregating power. Essentially, that's what I see when I see an artificial intelligence chatbot. It trawls a large, phenomenally large body of text, possibly all existing human writing, which is larger than I know, but not larger than I can imagine. Then it forms an answer to a question. How it does so is the ground meat in the sausage. As far as I can tell, it puts together a string of letters it thinks it has seen before, using probability to determine, one token at a time (a token being a letter, or a fragment of a word), what the answer should be. Then it checks the whole answer; if it sucks, if it strays too far from what it expects to see, it tries again until the output works. It works much the same way with images: probability decides, piece by piece, what the picture looks like. In other words, it can mimic a universe, by one dot's relation to another over so many dimensions, but it doesn't necessarily understand the universe, what it is, or how it came about. Understanding takes a bit more unpacking; that is the impulse behind humanity's desire for knowledge.
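If it helps to make that loop concrete, it can be sketched in a few lines of Python. This is a toy under loose assumptions, not how any real chatbot is built: the function `toy_next_token_probs` is a made-up stand-in for the trained model, its three-word vocabulary is invented, and real systems learn their probabilities from that phenomenally large corpus and choose among tens of thousands of tokens.

```python
import random

# Hypothetical stand-in for the trained model: given the text so far, return a
# probability for each candidate next token. Real models learn these numbers
# from their training data; these are simply made up for illustration.
def toy_next_token_probs(context):
    if context.endswith("the"):
        return {" cat": 0.6, " dog": 0.3, " end": 0.1}
    return {" the": 0.5, " sat": 0.3, ".": 0.2}

def generate(prompt, max_tokens=6):
    """Autoregressive generation: pick one token at a time by probability,
    append it to the text, and feed the longer text back in for the next pick."""
    text = prompt
    for _ in range(max_tokens):
        probs = toy_next_token_probs(text)
        tokens, weights = zip(*probs.items())
        text += random.choices(tokens, weights=weights)[0]
    return text

print(generate("the"))
```

The point of the sketch is only the shape of the procedure: nothing in it understands cats or sitting; it just keeps asking which fragment is most likely to come next.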
I think doubting artificial intelligence is a fool's game. Give it an eight-by-eight chessboard and it will grind through billions upon billions of positions where the pieces can go, and tell you how to win, or at least draw, from a great many of them. Give it anything that can be broken down into bits, and the same will be granted. What happens when you take an infinite processor, no soul, and let it loose on every fact known to man? It's hard to predict, but perhaps its god-like power is, within a certain sandbox or playground, capable of utterly, almost dismissively, smashing humanity's best efforts. So if that playground is bigger than we thought, or at least expands past what we drew up, what happens next? If you can show a machine a triangle and a circle, doesn't it follow that the machine will work out trigonometry and calculus, and so on and so forth to the nth degree? The playground is then as large as our reality. Well, we have to lose, I suppose. Our much-vaunted intellect simply becomes too slow, too small to matter. A machine's mimicry can outgun originality.
To those who value utility, utility even at the expense of self-nurturing and discovery, the advantages of artificial intelligence are indisputable. But that, I think, is the cost of using something so useful. I think there is great importance in slow, steady learning, with room to sit back, think and imagine, to breathe in the words. I think that neurons that don't fire enough tend to die, particularly those that only come alive when one tries to draw strange lines and images around what one has just learnt, trying to fit new ideas into one's existing scheme of knowledge. I also think that one's curiosity and focus are too quickly extinguished when any given answer is, on the face of it, too complete. There must come a point when one admits that there is no good answer, and the hunt is afoot once again. But that drive to know the weakest part of a thought or a structure has to be individual; it has to come from within. Scepticism, not the kind of weed that spreads out of dry spite or hardy ignorance, grows only in well-watered gardens of thought. And out of it bud rationality and intuition.
Why, after all, write when something can write for you? That is the great question, isn't it? Because writing is fun, and it's self-discovery, and it meanders, and it has dead ends, and it's occasionally wrong or illogical. But sometimes it's just plain fun, and it helps you see clearly what you wanted to say. Even if it's not fantastic, even if it needs polish, even if it doesn't rhyme or sing, it's still yours, something that you made and can leave behind. That great tradition is something we share with the finest writers of all time, names one must tremble a little to consider, much less to stand beside. Really, to give that up to a machine just seems to me a little vulgar.