COA philosophy professor Gray Cox. Credit: Junesoo Shin ’21

Since November, the internet has gone into a frenzy over developments in artificial intelligence. Generative AI systems continually surprise their own creators with powers that are improving at exponential rates. They write term papers, legal briefs, diet plans, political campaigns and screenplays. They pass college entrance tests and bar exams at the 90th percentile of human scores. People without any coding skills can provide prompts that generate compelling visual images to order. People in need of therapy can turn to AI for interaction. It is all very exciting – and disturbing.

Programmers do not actually understand the software they have created the way mechanics, for instance, understand the machines they build. These new systems are designed largely to program themselves, using methods modeled loosely on evolution and on animal learning through reinforcement. The programmers function more like dog breeders and trainers, who can guide the process without knowing anything about the DNA of the evolving breeds or the neural systems of the creatures they train. The evolving black boxes of code comprise billions of parameters and formulas that are beyond any human’s power to grasp.

Despite this ignorance of how the black boxes work and what side effects they produce, Microsoft and others are plunging ahead in marketing them to millions. They plan to wrap them into search engines like Bing and commonly used apps like Word, and they plan to pay for them, at least in part, with advertising. If funded by advertising, the AI systems that now exist will prove incredibly effective at turning citizens and their deliberative powers into products that businesses and politicians can purchase.
