This column is not, I promise you, written by AI. I’ve never published anything that is wholly written by AI. I have, however, published work that is helped by AI – sometimes for editing, sometimes for a first draft, sometimes for brainstorming. (It is a great travel planner.)

In the past month, I’ve used ChatGPT to help my large family figure out the most travel-time-efficient place for us to vacation, to suggest possible places that might complement a trip to Japan, to generate fun ideas for a bridal shower, and to analyze a long legal document for me. 

How about you? Are you using any AIs? If so, for what? Is it very helpful? Somewhat helpful? Terrifying? Of no interest? And how do you think it will shape our world in the next few years?

(And, again, this column is completely AI free!)

 


  1. There’s nothing intrinsically wrong with AI, if it’s produced ethically. But it’s not.

    An AI assisted search uses much more energy and water than an old-fashioned search. An AI image even more so. Plus it’s all built on stolen work and jeopardizes the livelihoods of creators. It creates a reputational risk for any organization that uses it.

    So until they pay the authors and artists whose work they stole, and build renewable energy plants and circular water systems for their data centers I will do my absolute best to avoid using it.

    1. I’m 100% with you. AI is already putting a lot of creative people’s livelihoods at risk, and in the UK at least, I’m reading that graduate recruitment in some sectors – banking, accountancy, consultancy, etc. – is down around 50% because the ‘starter’ jobs are being done by AI.

      I’ve already reached the stage where I question pretty much every image I see online. It’s causing massive problems in academia with the potential for cheating, and students are not bothering to learn critical thinking skills as a result.

      I don’t use it and don’t intend to. Someday AI may help find a cure for cancer, but until the people behind it can make money doing that, they’re taking the easy path and screwing everyone over by making them think it’s making their lives easier whereas in fact, it’s just switching off their brains.

    2. It’s a tool more for intellectual thieves than for real people right now. The amount of content from bands that is labeled official but is not, just AI-generated using their voices, is ridiculous. You have to double-check the source of everything, which is annoying.

      1. Yes – there was a thing a few weeks back about that ‘band’ on Spotify who were getting a massive number of plays, which turned out to be totally AI generated. And the world of audiobooks is being invaded by AI narrations. We can’t escape it – which might not be so bad if there were ways to turn it off – you used to be able to get rid of AI search results in Google, but they’ve now made it so you can’t; there’s a little AI circle at the bottom of my WhatsApp screen that I can’t turn off, customer service ‘helplines’ are increasingly using AI chatbots that are not helpful and don’t provide any service whatsoever. I know I sound like a dinosaur/luddite, but I detest the way this technology is being used and I think there will soon be a hefty price to pay. And that’s even before we start talking about the amount of natural resources being used.

        1. If the dinosaurs were still around and could talk to us, they’d probably have a lot of valuable things to say. Like, sometimes bright balls in the sky aren’t just something to wish on, and you should have a contingency plan for when they land! All that to say, I think we dinosaur luddites have been around the block a few times and aren’t just pooh-poohing the new but genuinely warning of the danger.

  2. I’m pretty leery of it because sometimes it’s flat-out wrong, but I’ve successfully used it to write Excel formulas for me.

    It’s only as good as whatever work it’s referencing. The classic garbage in, garbage out problem, so if you’re asking it for facts, it’s a mistake to trust it without verifying what it tells you.

    I think after a painful learning period where some big things go wrong because people relied too heavily on it, it’ll settle into being more of a casual tool than the answer to everything like some people think now.

  3. I haven’t used it myself, but an author friend used it to suggest tag lines and titles for a series she is writing. Some of them were useless, but to her delight some of them were great—better than she’d come up with on her own. Since many authors find tag lines, titles, and synopses harder to write than the book itself, this is a possible legitimate use for AI.

  4. It is hard to avoid AI when it is the first thing that shows up when you do a Google search. I do read it, but I also scroll down to find the Wikipedia entry on the same topic. Microsoft Word offers AI editing of my work, which I also don’t use.

    I am a mixed-media artist and quite tech savvy, but I don’t use digital tools in creating art. Years ago, using art software to create art was controversial and was not even accepted as art. Now many artists create art by hand and then scan it, clean it up with art software, make it perfect and shiny, and set it up for commercial licensing and reproductions. Many artists now use Photoshop and Illustrator to create art. It is cheaper and quicker, and you can also create beautiful, original, powerful art with them. Digital art is now an accepted art form like oils and watercolors.

    AI, at one level, is doing in a big way what computer technology has already been doing for the last thirty years. Of course, it does not create anything original, and so I doubt that it will replace human ingenuity and creativity. But it can and does deprive creators of original content of just compensation for their labor. It can also replace human labor.

    That’s what Dabney did—she used AI to do the work of a lawyer who studied for three years to get a law degree, incurred thousands of dollars in student debt, and spent hundreds of hours studying to pass the bar exam. It saved her money, I am sure. It probably cost the hypothetical lawyer a few thousand dollars in fees. I am also guessing that the overall stakes were low enough for her to use AI instead of a lawyer. Would she take the same chance if the stakes were really high?

      1. I switched to it a while back although I occasionally forget and use Google! It amazes me that a company that had built up a lot of trust among its users over many years would just throw it away for the sake of something nobody wants. I see, over and over again, examples of how Google’s AI search results are just plain wrong – you can’t trust them.

    1. This was a doc I paid a lawyer for! But it was very long and complicated and I thought I’d understand it better if I had a summary.

      1. Why didn’t the lawyer explain the document to you? I would assume that it would be part of the service for which you had paid.

      2. Let me modify my comments—you had your reasons for using AI to analyze a long legal document. My follow-up question is—was it helpful? Did it do a good job?

        I am not opposed to using AI as an assistive tool. Still, there is that basic question—would AI’s use have the cumulative effect of causing redundancy in the labor force, taking away jobs, and depriving people of their livelihoods?

        1. would AI’s use have the cumulative effect of causing redundancy in the labor force, taking away jobs, and depriving people of their livelihoods?

          It is absolutely doing that in many areas.

          1. Even before the advent of AI, automation of human services has been going on for several decades—when was the last time any of us was able to reach a human being on a bank phone line? Physical banks barely have any staff. Can anyone even think of making a career in retail banking? Ever since Expedia came along, travel agencies have been dying a slow death. AI may accelerate the pace or take over services that are provided by humans right now, but it is not doing anything that is not already taking place.

          2. Yes, but it is doing it at a faster pace and in sectors that have not previously been subject to automation, like the creative arts. And as I’ve said somewhere else here, I’ve read a lot of articles lately about jobs that would once have been taken by graduates as a first step into training in certain fields (accountancy, banking etc.) being cut by half.

        1. This is the conundrum: if the original document was too long and complicated and you needed ChatGPT to summarize it for you, how would you know it summarized it correctly?

  5. Won’t someone think of the lawyers!

    LOL, it’s a brave new world out there.

    *Not a few lawyers and at least one judge I’ve read about have gotten burned by relying heavily on AI to write court filings that quoted made-up case law.

    1. Yep, and this keeps happening. People take the easy way out and have generated entire AI books about mushroom gathering that encourage people to eat poisonous mushrooms. The fact that people can’t even decide what to eat for dinner without asking the Genie in the Lamp is just sad, too.

  6. I find it curious that a website that reviews creative works would be so eager to promote AI (specifically Large Language Models) when it devalues creative work and is based on the stolen work of creatives.

    And the not infrequent columns about AI with examples of things you’ve put into ChatGPT are pretty much a promotion (unpaid I assume). I’d feel a lot more comfortable about statements like “this column is completely AI free!” if AAR made a clear commitment to not use AI in any of its reviews and columns, not to review AI generated works, and stopped with the examples.

    AI used to analyse massive amounts of scientific or mathematical data is a reasonable use case, particularly if it’s done using renewable energy and/or carbon offsets. Large Language Models on the other hand are just regurgitating stolen work, and I can’t see any benefit that justifies the theft or the carbon impacts.

    1. Speaking as an individual and not on behalf of AAR, I, for one, can promise that I don’t use AI (and never will) to ‘help’ write my reviews, and that I will never – knowingly – read, purchase or review a book produced using AI.

      1. I think the knowingly part is becoming harder and harder. Companies/generators seem to go to great lengths to hide that the item is fake. Recently, I was looking at something where, rather than using “official”, the person/computer/faker used “officia” as part of their poster name. So it should be bandnameofficial@label, but the fake used bandnameofficia@fake. You have to read the fine freaking print now.

      2. I would bet that most books published today have, at some point in their creation, used AI. Not to write, per se, but to organize, brainstorm, etc. There have been writer-based platforms in use for years–Evernote, Scrivener, LivingWriter. I’ve spoken to many authors who are using AI to organize and edit their works. Many are using image generation for their covers. When we say we’d never read a book produced using AI, how do we see those choices?

        We’ve talked endlessly here about how poorly paid writers are–if a writer can use AI to edit their book rather than pay an editor (and they are comfortable with AI’s work), is that unethical if it means it enables them to publish more affordably?

        1. What does AI-aided mean, though? I see a difference between Grammarly, which uses AI techniques, and AI. And I will add that Grammarly sucks a lot of times. So I think we would need to neatly define what AI we are talking about before saying that authors for sure all use it.

          I’ll also add that this changes the conversation on plagiarism. If you didn’t generate the work, how can you claim it as original?

          1. I would not say that all authors for sure use it. I would say many. Here’s a study that says almost half acknowledge that they do.

            What does it mean to generate the work? If you wrote a book and had it professionally edited, you had help. If you hired someone to design a cover for you, you had help. I can see an argument that doing those things with AI isn’t ethically different. That is a different issue than its environmental or jobs impact.

            Again, I have many reservations about AI. But I don’t think using AI in any way is inherently an unethical choice.

            As for plagiarism, that is a fascinating question right now. I just had a long talk with my brother-in-law, who is a history professor at a major state school. He said that the university is moving away from papers and take-home tests because there’s just no way to limit how much students use AI, not just to write–that’s easier to catch–but to organize, edit, and brainstorm.

            There are several interesting takes on what plagiarism means in the era of AI. It feels like there’s a distinction between generating content and using AI tools.

            Is Using AI Plagiarism?

            Plagiarism and Student Use of AI Tools

          2. I’m confused about some of your points. Cover artists aren’t “helping” an author write their work. They are helping with a sales aspect, but for centuries, we have known that authors often contract with publishers to sell their books. I still remember Suzanne Brockmann’s fury over the original Get Lucky cover. Her chiseled, abs-toting ladies’ man became the Pillsbury dough boy in the hands of Silhouette. I could list a lot of horror stories about how cover artists have screwed over authors/publishers, but my point is that cover art and novel plagiarism/book authorship have nothing to do with each other.

            If we are talking about programs like spell check and Grammarly, I don’t think they are relevant to a plagiarism conversation, either. When I went to college, those were the tools we were expected to use. This would be like saying someone who referenced the Chicago Manual of Style was a plagiarist.

            Editors often did that type of work, and since the machines aren’t perfect, it would be nice if they still did (I’m sure we could all offer up examples of books that went to publication with some very interesting errors), but I get that cost savings mean we accept machines taking over that function. Again, whether that is done by a human or a machine, I don’t think it would result in accusations of plagiarism or of authors not being the generators of the original work.

            What I am talking about are things like the Janet Dailey/Nora Roberts lawsuit. At the time, people questioned just how much the scenes ‘lifted’ from Roberts constituted plagiarism because they were so generic. The same thing was true with the Smart Bitches/Cassie Edwards kerfuffle. There were no legal ramifications to Cassie’s alleged plagiarism. My question surrounding AI is, someone says to Claude, write me a love scene in a barn with m/f protagonists. Another does the same. Do either of them own that work? In such a situation, would the original author do what Nora did and sue?

          3. I’m suspicious of that study because it’s unclear which authors participated in it. It’s not very scientific — after all, it was done by Bookbub. And Bookbub makes its money selling its services to specific types of authors interested in sales and volume. IIRC someone pointed out that because Bookbub did the study, they attracted a lot of authors from very specific groups.

            Some of these authors belong to groups that promote putting out lots of books in a short period of time. And these groups have started promoting the use of AI to make it possible to accomplish these ridiculous goals.

            Most authors don’t belong to these groups. Most readers aren’t interested in books by people who put out 12 or more cozy mysteries a year or whatever.

          4. I can see an argument that doing those things with AI isn’t ethically different. 

            It’s EXTREMELY different. AI ad-hocks and chockablocks other texts, including commercial ones, making it extremely legally tetchy. It’s extremely ethically dubious.
            Plagiarism is plagiarism, period.

          5. At my Other Job, we’re not allowed to use Grammarly on pain of being fired for this very reason.

    2. I’ve said this every single time Dabney brings up AI in this cutesy, teasing “oh look how cool AI is but I’d never use it” way. This site talks about the importance of original work and opinionated reviews and yet slavishly kowtows to the “wave of the future” that will make human criticism obsolete if we don’t watch out.

      I’d feel a lot more comfortable about statements like “this column is completely AI free!” if AAR made a clear commitment to not use AI in any of its reviews and columns, not to review AI generated works, and stopped with the examples.

      Ditto. How are we supposed to trust this website and any review on it when the publisher keeps touting this nonsense?

      I would be shocked that Dabney used AI models to analyze a legal contract, by the by, if it didn’t reflect other moderation-related choices already made by this website. AI is notorious for its incorrect translations and its faulty advice, since it can’t parse false information from real information. She’s very lucky if that doesn’t cause her some serious legal problems in the future. The rest of it – party ideas and the like – just reflects the typical inability to make firm choices with forethought and a lack of organization, which also frequently rears its head on this website.

      Oh by the way, Dabney — really? Sticking my other email under a spam filter to force me to wait for comment approval? Really? When I don’t spam this place at all?

      1. I’ve been accused of trolling before but I rarely speak up unless there’s hypocrisy about. I just find it hilarious that the “free speech for all” website decided to filter me. By the by, my message under that email was exactly the same as it is above.

        1. Again, I don’t control spam. I have been on a family vacation and have spent very little time on the site in the past few days.

      2. I do not control the spam. It is controlled by Akismet. I promise I have done nothing to your email. I’m not sure what happened but if there’s something I can fix for you, please let me know.

        1. Good to know. I was quite confused since I’m not trolling, have never trolled and do not plan on trolling. I may be abrasive sometimes when I comment here, but I obviously don’t hate this website or you, otherwise I wouldn’t spend my time here. I may want it to be better but there you have it.

          I have no idea what happened either. I used the same email I always use, and the message was identical to the one I posted above which didn’t trip the spam filter.

    3. Like Caz, I can’t speak for the site, but I have never used AI to write my reviews. I do use spell check, but that’s mostly for typos. To the best of my knowledge, I have never reviewed an AI-written book. I’ve only written one column lately (fan fiction), and that was not AI-generated.

      1. I’ve never read an AI generated book that I know about either. But I’m sure many of the books I’ve read in the past year were, at some point, aided by AI.

    4. Adding my pledge that I will never use AI in any of my writing, and I will take a strong stance if my writing is ever used to train AI.

      1. I have, since I became aware of AI, put AAR on do-not-steal lists. We have software that is supposed to prevent scraping. But I’m not convinced even those efforts stave off the LLMs.

          1. Many are suing AI LLMs. As I said, it’s the largest intellectual theft in history if you ask me.

  7. I avoid it as much as possible – I’m actively discouraging my book club members from using it to generate questions and comments. The huge issue that I don’t think is getting enough airplay is the amount of power that is used to make it work. I gather AI was expected to rationalise its power use by now and it hasn’t. Certainly in Australia AI will affect our baseload of power, and any transition away from coal/gas to more sustainable sources hasn’t factored in the amount of power needed for widespread use of AI. This is a little bit terrifying, and not easily solvable once a new technology is out in the wild.

    1. This pisses me off about technology in general right now. It is sent out as soon as possible to maximize profits, but it is rarely thought through. In fact, a lack of thinking things through is at pandemic levels right now, and I have a feeling AI is only going to make that worse.

      1. Agreed. Nobody appears to have given any thought as to how all those people who now and in future can’t find jobs thanks to AI are going to be able to buy any of the things those companies are selling, let alone eat and pay bills.

        1. The clear hope is to drive people out of tech and back into the factories and fields to replace immigrant labor. There’s a delusion that if we try hard enough we can go back to 1940 or 1950 and replace all of our manufacturing needs with domestic labor. It ain’t happening.

        2. Many think the job displacement caused by AI will lead to some form of UBI (universal basic income). Even the WSJ today writes:

          The spectacular costs associated with AI will force a debate on the sharing of its profits. The wealthy and powerful who own the AI companies won’t like that. But those who wished and failed to see the social media companies declared a public utility 10 years ago, and who drew support from the populist left and the populist right—they would like that a lot. 

          1. I remember 20, 30 years ago, when the talk was about how automation would make life easier in so many ways and how people would all have so much more leisure time as a result. Nobody ever asked the question – hang on, if all these people can’t work, how are they going to pay for stuff? How are they going to buy all these things being produced by automatons?

            And nobody seems to be asking it now.

          2. In the US, the top 10% of earners generate 50% of consumer spending. Increasingly business is seeking the dollars of the uber wealthy rather than those of the majority. Again, from the WSJ:

            The top 10% of earners—households making about $250,000 a year or more—are splurging on everything from vacations to designer handbags, buoyed by big gains in stocks, real estate and other assets.

            Those consumers now account for 49.7% of all spending, a record in data going back to 1989, according to an analysis by Moody’s Analytics. Three decades ago, they accounted for about 36%.

            All this means that economic growth is unusually reliant on rich Americans continuing to shell out. Mark Zandi, chief economist at Moody’s Analytics, estimated that spending by the top 10% alone accounted for almost one-third of gross domestic product. 

            One can see a future in which only a few companies seek the dollars of the majority. #Notgood

  8. As I’ve written before, I think AI will be ruinous for the job market, our mental health, and the environment. It is also, as many here have commented, based on the largest intellectual theft in history.

    I also think it is here to stay and will become a bigger part of our lives, especially here in the US where our administration has evinced no interest in putting in any guardrails. (Neither did the last administration for that matter.)

    Does that mean no one should use it? That is not a stance I take although I respect those who do. For me personally, most days, I can barely keep up with all I have to do. I pay no one to help me–and no, I don’t have a housekeeper, etc…–and I wear many hats: in my family, in my community, here at AAR, and, until recently, at my other job.

    AAR is a huge time commitment for me and while I very rarely use AI here–I have asked, ironically, for prompts for the ask because sometimes I struggle to think up new topics week after week–I am using it to help me be better organized, to help my family with all sorts of planning, and to summarize the phenomenal amount of information I need to make sense of to meet my responsibilities. Even so, I feel overwhelmed and my energy level is not, thanks to cancer and age, what it used to be.

    I do, however, regularly petition my elected officials to put guard rails on AI and I work with the younger generation to understand its limits and its lies. As I said, I fully support all those who shun it. But I do not and I am at peace with that. I have followed AI and its development for years and, like so many things, I do not feel that, for me and my children–they have no choice but to use AI in their jobs if they wish to keep them–never using AI is an option.

  9. There’s a writer who primarily writes about animal welfare who has put together some context on LLMs and energy/water usage. His takeaway is that, while generated video may consume a disproportionate share of resources, the average person is using up a lot more energy by streaming video than by prompting LLMs. So I feel more self-conscious about my tendency to binge Fascinating Horror episodes while making dinner than I do about my use of large language models.

    I’ve played around with Claude a bit (I trust Anthropic slightly more than Open AI or Google/Alphabet, and a lot more than Meta). I don’t feel like I’ve gotten anywhere near its capabilities, though. If I were to experiment with it more it would be in my non-fiction job. For fiction I write all on my own, but when I finish the current manuscript I’ll run it through ProWritingAid, to check for grammar/spelling errors and also to do the kind of quantitative analysis that an LLM can do better than I can (if I tend to use the same phrase over and over, for example). I might eventually use Claude to brainstorm marketing ideas, or, say, give it a book blurb, ask it to assume a reader persona, and see what feedback it gives me.

    I like Adam Mastroianni’s metaphor of LLMs as “bags of words”, which gives you an idea of where they work well and where they don’t. It can put words together, but it can’t make new thoughts (or new words) out of the bag. (This is why I think “AI” is a bad label — it may be artificial, but it ain’t intelligence — and why I’m also not at all convinced that self-teaching LLMs are right around the corner.) The cases where I’ve seen it act most impressively are those where the bags of words contain instructions, i.e. “Take this data for me and format it into a table,” which would take me ten minutes in Excel and takes Claude seconds, or where the bags of words contain low-level information in a field I’m not familiar with; I had a whole conversation with Claude about which houseplants I could grow in my basement with artificial light.

    One thing I’ve noticed: a lot of people I’ve discussed AI with seem to react to it with a certain visceral disgust. And somehow these people are almost exclusively women. I’ve seen (mostly, not exclusively) men react with “AI is bad” or “AI is overhyped” or “AI is leading us towards certain destruction,” but the disgust isn’t there. And I can’t tell you whether this disgust is good or bad, justified or not; I can only tell you that I… don’t feel it. I don’t know why not. Maybe because I’m autistic?

    But I imagine that, to someone who does feel the visceral disgust, talking about the best use cases for LLMs is worthless. It would be like discussing when to let your pet spider explore safely outside its habitat with someone with arachnophobia.

    1. Experts agree that they don’t know how deep learning works, which is the primary tech behind AI. That’s why companies wind up giving refunds for AI mistakes, like the chatbot that offered a ridiculous discount because of the prompts it was given, or why a racist rant appears when that was (allegedly) far from the original programming intent. I think one of the big problems with these conversations is that we are talking about all different kinds of AI and uses of AI as being the same thing. For example, I recently used Google to find out what holidays are based on books. It gave me both an AI summary and various websites where humans had made lists (probably themselves aided by some sort of search engine). I don’t think the world would have ended if I had used the AI list versus the websites, but I preferred the sites. However, that kind of function for AI makes sense to me.

      My husband and son recently went to Japan. It was my husband’s third trip, and he hired tour guides and translators for portions of it because translation programs don’t work perfectly with dialects, and often it takes you longer to communicate what you need than people have patience for. The human guides can direct them to better sites to see and places to eat because they’ve actually been to the restaurants and aren’t relying on internet reviews that friends/family can inflate. I’ve heard people complain that following the internet around on trips can get you a very generic experience, where you wind up seeing only commonly known areas and shop at tourist traps. He had some transportation and medical issues that required human aid, too, which made those folks well worth the small fees he paid. Machines are great at giving you plane schedules and timetables, but not so good at a lot of other things that make a trip memorable.

      I could do a blow-by-blow of the bad medical advice (sometimes catastrophic) that people have taken from AI, the garden my husband did with AI assist that was a bust, or the truly abysmal writing I’ve seen as a result of it, some of it truly giggle-worthy. I’m making a hodgepodge of this, but my points are simple: lots of the tasks people assign it come out with mediocre results. They wind up spending thousands on an AI-assisted trip that is a cookie-cutter experience, the same as everyone else’s, and far less satisfying or immersive than it could have been simply by adding a few humans. Or a garden that is less awesome, natural, and easy to maintain than a half-hour conversation at the local garden shop would have yielded. (All the plants I’ve done that with have thrived; my husband’s scientifically sound AI garden is dying.) The disgust may well come from women placing greater value on human interaction and also realizing that it is unwise to trust high functions to things that their own creators don’t quite understand.

      1. I’m not entirely convinced as to the last point. There have been a number of scientific breakthroughs that happened before theory caught up to them — for example, I’m pretty sure Edward Jenner couldn’t explain how cowpox inoculation saved people’s lives. Sometimes it really does make sense to invent first and ask questions later!

        I was thinking about the garden-shop example after I posted. When I was pinging Claude with questions, I was standing in a shop a couple miles from my house. Let’s say, off the top of my head, that there were 500 different plants on site, each with their own relationship to water, the local soil, and sun. The shop is part of a regional chain that has a decent reputation for the employees knowing their stuff, but is also known for giving teenagers summer jobs — my friend’s kid worked at a different branch for a couple summers, and I don’t think he knew much about plants when he was hired. So let’s say that Claude’s answers to me (I got two pothos — pothoses? — and a ZZ plant, and for the record, they’re all fine) were not as good as the folks at your garden center, who have deep knowledge, but better than the answers my friend’s kid, after a week of work, would’ve been able to give.

        The presumed risk is that Claude crowds out the people with deep knowledge, or, to put it another way, that that deep knowledge won’t be as valued. Fair enough: so then how do we value it? Obviously one possibility is that the person with deep knowledge charges an extra fee for access to that knowledge, just like the translators and other aides your husband hired did. That’s not the expectation now. I don’t know your garden center, but I suspect that at the one I visited, a person who’s worked there for years and years and knows the plants inside and out is probably not paid that much more than my friend’s kid was, and is expected to find some compensation in getting to stand around talking about plants. In the absence of Claude, I’m still supposed to be able to access this person’s deep knowledge for free (since I could’ve walked out and not bought any plants!). Whereas if I had entered with my questions and the person with deep knowledge had said, “I can explain to you exactly how to set things up, but my consulting fee is $40 per half-hour conversation,” that would be absolutely fair on their end, given the years of work they put in to acquire their deep knowledge! But I probably would’ve just not bought any plants, then.

        You could argue that the situation we’re in now, where the garden-center employee is undercompensated for their deep knowledge and I can use Claude, is the worst-case scenario. I’m just not sure that banning Claude and its like actually does the garden-center employee any favors.

        1. Your use of vaccines as an example is interesting. Before 2016, I read a book on vaccination written by a medical doctor, who discussed how the inability to explain to the public how and why vaccines worked was starting to have serious repercussions for how parents responded to them. This was around the same time Jenny McCarthy and Jim Carrey were appearing on Larry King to talk about the supposed dangers of inoculating your children. Sometimes, doing a better job of explaining to people what you are doing and why can prevent some undereducated, valueless moron from becoming Secretary of Health. “Trust me, I’m a doctor” doesn’t fly with a lot of people anymore.

          Regarding the garden center, going to a location with trained staff makes a difference. I’ve lived in my area a long time and know where to go, but I would need to do some deep research if I wanted to pick something up at Home Depot or Lowe’s, for example. When I’ve watched YouTube videos for a do-it-yourself project, the creators often recommend going to a shop that knows its products if you aren’t an expert in what you need. You get what you pay for. If you’re after something cheap and easy, Claude might be a great tool, but if you have tricky soil and a volatile, finicky climate to deal with, generic advice might not be your best option.

          I am not sure what the solution is, but I am fairly sure that just continuing forward in the manner we are now is not it.

          1. I definitely agree with you that you get what you pay for! But the interesting thing about the garden-center example is that you didn’t pay for that expertise, at least not directly. You could just as easily have had a half-hour conversation with a gardener friend and then spent five minutes in the store, and you would’ve been charged the same for your plants. Moreover, the store owners have limited control over their customers’ demand for expertise. So it can be hard to justify paying an employee extra for their expertise if you can’t point to that expertise leading to any increased revenue. Which to me raises the question: has the garden-center employee’s expertise been undervalued all along?

            Of course, the garden-center employee’s compensation might be more than just money — they might like setting their own hours, say, or take pride in being able to show their knowledge to customers. But “we won’t pay you a lot, but you’ll love your work” can be a risky road to go down, I think. (And that problem predates AI: think of all the Borders or Radio Shack employees, say, who had deep knowledge, and whose post-bankruptcy resumes said “store clerk.”)

            By contrast, when your loved ones traveled to Japan, there was more obvious market segmentation: some people who didn’t want to (or couldn’t afford to, on the margin) hire translators and local guides had the more crowded “touristy” experience, and your loved ones were willing to pay the extra premium for human deep knowledge. I wonder if one of the consequences of LLM use will be that it’ll become more common to distinguish between shallow but easily available knowledge, and deep contextual knowledge that has to be paid for. This is just one example, but a pattern-making company I respect a lot recently announced on their mailing list that they’re withdrawing from Instagram and starting an in-house zine.

            (But, pursuant to Dabney’s comments, I fully expect there to be a race between crawlers and websites for a while, just like email companies fought a war against spam in the 00s.)

          2. I don’t know if the employee was compensated, but I did pay a premium for going to that garden center. The same item at Walmart was about 25% less. Given that I don’t see any teens at the garden center, and I do see a lot of the same people year after year, I think something is happening to keep them employed there. I go there, in fact, because even other locally owned shops employ young people who will give you that Gen Z stare and a flat, “No idea how these grow. Should be on the label.”

            I doubt that it will be easier to distinguish between shallow and deep contextual knowledge in the future. Rather, I should say that I doubt that it will matter. In our opinion-driven rather than expertise-driven culture, where a thing like “alternate facts” exists, it is no longer relevant what proof you bring to the discussion. The other person is simply going by their visceral response. If their Uncle Bob survived an accident where he wasn’t wearing a seat belt, what more proof do you need that seat belts are unnecessary? As one man told me, the minute he got the COVID vaccine, he got COVID. I asked if he’d travelled, and he said yes. In fact, he got the vaccine because he was taking a cruise where they wouldn’t let you on if you didn’t have it — but that shouldn’t have mattered (per him) if the vaccine really worked.

            Your pattern company is going retro. It will be interesting to see if that works. In the 90s, everyone had an in-house newsletter. A company I worked at produced one until the mid-2000s, when social media took over.

  10. I try not to use it. I have a brother who works for Data Annotation – he trains AI chatbots. It’s a good gig at $40 USD an hour and he can work from home and pick up hours as he wants. My son has often used ChatGPT to check his math homework (grade 11 algebra). But I don’t like that AI has been trained on authors’ and creators’ works without their permission, and I have issues with the power and water needed to run AI.

    1. That article relied a lot on the word “should.” To quote: “We should instead avert the security threats from AI by building technology that defends us. We should accelerate human-boosting AI over human-automating AI. We should build technologies that let regular people train their own AI models, run them on affordable hardware, and keep control of their data—instead of everything running through a few big companies.”

      Remove the should, and you have what is actually happening. AI is a security threat. AI is human-automating (replaces humans). AI allows all our data to run through a few big companies.

      The author of that piece believes that “we” (I’m assuming that means the masses) can stand up to the powers behind AI (large corporations and governments) and require them to change. I don’t know where they’re living, but I sure don’t feel very in control of any of that where I am.

  11. I keep thinking about R.U.R., the 1920 play by Karel Čapek. It’s about Robots who take over the world. If AI can figure out how to make robots that keep the machinery going, do humans have a purpose?
