OF THE MANY insidious consequences that accompany the use of artificial intelligence via ChatGPT (environmental impact, job reduction, general end-of-the-world vibes), there is one aspect of it that alarms me the most: it creates the illusion that in crowdsourcing innumerable opinions from an amalgamation of strangers across time, human beings will be able to express themselves perfectly.
But, by and large, the use of ChatGPT is cloaked in the dark undercurrent of what I would argue are two downright dangerous moral problems: 1) the assumption underlying AI-perfected expression is that humans are no longer up to the job of speaking for themselves, and 2) by bypassing the time, effort, and experience it takes to arrive at what you personally think about a given topic (and therefore how you would say it), it not so slowly erodes character, integrity, and original thought.
No machine can substitute for those relationships in my life that have been slowly cultivated through navigating disappointment, grief, and life’s endless slings and arrows. A machine will not be able to give me advice that takes into account the context of my life; a machine does not understand my specific weaknesses, proclivities, or excesses. But my friends do. These are the people with whom I have made “more tracks than necessary, some in the wrong direction,” as Wendell Berry would put it. In community, you are forced to reckon with what you actually think by knocking heads against each other and disagreeing, by having long conversations about the same thing over and over again, by failing, by thinking you think one thing only to have life teach you another.
