I have already written at length about my neck and shoulder pain, which I am working with my doctor, a physiotherapist, and a massage therapist to treat. I’ve also had an ergonomist come and do an assessment and adjustment of my workstations at my employer, the University of Manitoba (I’m still waiting for his final report, with a shopping list of equipment to be purchased to help me get through an eight-hour workday without pain). I am still very much in the process of learning which actions are detrimental to the couple of deteriorating cervical joints in my spine, and which are more beneficial!
For example, you would think that having the extra weight of a virtual reality headset on my noggin would make things worse. However, I have been astonished to discover that my neck does not become as sore, as quickly, when I am using the Mac Virtual Display feature on my Apple Vision Pro, along with my MacBook Pro at work!

Therefore, I have been working 3 to 4 hours a day like this, as opposed to just using my MacBook Pro with an external monitor attached. The ergonomist did set me up with a temporary notebook riser, adjusted so that I am not hunched over the keyboard, and aligned so that the tops of both the MacBook Pro screen and the external monitor are at eye level. I find that working like this, without my AVP, my neck and shoulders still start to ache after about two hours, and I have to stop, take a break, go for a walk, and do some of my physiotherapy exercises. As I mentioned earlier, this is a learning process.
On Wednesday, at lunchtime, I got up from my MacBook Pro, unplugged my Apple Vision Pro from its battery charging cable (I tend to leave it plugged in when I am working seated) and, while still wearing my AVP, went to the washroom. My coworkers in the library are already well-used to seeing this strange person wandering around with a VR headset on, and my vision while wearing it is almost as good as it is when I wear my glasses, so I often do this if I have to make a short walk to the printer, or in this case, the washroom.
However, on my way back from the washroom, disaster struck. The cord between my Apple Vision Pro (on my head) and its battery (sitting in the front left pocket of my pants) got caught on a metal part of the door to my office cubicle space. My AVP is okay, but I wrenched my already-painful neck badly, and as a result made a bad situation even worse. (Lesson learned: you need to take that damn power cord into account when moving around!)
As a result, I have been off sick from work for two and a half days this week, spending a lot of my time either lying in bed or lying on the sofa. On top of that, we have had not one, but two Alberta Clippers roar through Winnipeg on Wednesday, Thursday, and Friday, so I have been apartment-bound as well as largely bed-bound. I just find it ironic that the very thing that seems to make my pain more bearable (the Apple Vision Pro) can also make it more severe! This has just not been my week.
Anyway, this is my usual off-topic preamble to the real purpose of today’s blogpost. I had promised that I would share with you, my blog readers, the artificial intelligence presentation I had been researching since this summer, which I have recently delivered to three separate audiences: University of Manitoba graduate students, graduate student advisors, and the professors and instructors in the Faculty of Agriculture and Food Sciences (the latter being the group for whom I am the liaison librarian, and from which the original request to create and give this talk came, via the chair of the agriculture library committee, many months ago). And while this talk was overall very well-received by my audiences, I did receive some negative feedback, and I wanted to talk a little bit about that as well. AI is a divisive topic in an already-divisive age.
I’m going to share an edited version of my PowerPoint slide presentation, with some University of Manitoba-specific bits removed, as well as any contact information removed (sorry, the UM faculty, staff, and students have the right to call on me with questions after my presentation, as I am their liaison librarian; you don’t 😉 ).
Also, I will be transparent about how I used generative AI tools in creating this PowerPoint presentation. I currently have paid-for (US$17-20 a month) accounts on three general-purpose generative AI tools: OpenAI’s ChatGPT; Anthropic’s Claude; and Google’s Gemini. These are the “top three” general-purpose generative AI tools currently recommended by Ethan Mollick (more on him later in this post). Do I plan to keep paying for all three? No. But I have found it highly instructive to enter the exact same text prompt into all three tools, and then compare the results!
In addition to conducting my own research into artificial intelligence in general and generative AI in particular, I used both ChatGPT and Claude to do additional research into this topic, some of which made it into this presentation. I also had a lot of text-heavy slides in the first draft of my PowerPoint presentation, so I asked Google Gemini to provide suggestions on how to reformat my slide presentation to have fewer bullet points per slide (which I think it did a pretty good job at).
I also did try to ask both ChatGPT and Gemini to redesign the theme and design aspects of my PowerPoint slides, but I was extremely unsatisfied with the results, despite several attempts, and I finally gave up on using AI for that task. So please keep in mind that generative AI (which I will refer to as GenAI from here on out) can still fail miserably at some tasks you put it to work on!
Here is my PowerPoint slide presentation, complete with my speaker notes, for you to download and use as you wish, with some stipulations. I am using the Creative Commons licence CC BY-NC-SA 4.0, which grants the following rights and imposes the following restrictions:

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
This license requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only. If others modify or adapt the material, they must license the modified material under identical terms.
• BY: Credit must be given to the creator.
• NC: Only noncommercial use of the work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation.
• SA: Adaptations must be shared under the same terms.
(The tool I used to determine the appropriate Creative Commons licence can be found here: https://creativecommons.org/chooser/.)
So, here is my PowerPoint presentation (click on the Download link under the picture, not the picture):

In addition to sharing my slide presentation with you, I wanted to highlight a few resources which I discussed within it, which you might find useful. These are books and websites which I used as I worked my way up the learning curve associated with AI in general, and the new wave of GenAI tools in particular.

I start off with a bigger-picture look at the whole forest of artificial intelligence, later narrowing my focus to look at GenAI tools, a new subset of greater AI. First, a really good layperson’s guide to GenAI is a 2024 book by Ethan Mollick, titled Co-Intelligence (see image, right). One thing I want people to remember is that the new wave of GenAI tools only dates back to 2022, when the capabilities of these new tools (ChatGPT, DALL-E, Midjourney, Stable Diffusion, etc.) first captured the general public’s imagination, and stoked their fears. There are lots of published books about AI, but if they were published before 2022, they won’t cover the part of AI that is making the most noise right now. Also, keep in mind that any print/published book will soon be outdated, because the field of GenAI is evolving so rapidly!
Ethan does a good job of covering the territory, and I share with you his four rules of AI:
• Principle 1: Always invite GenAI to the table. You should try inviting AI to help you in everything you do, barring any legal or ethical issues, to learn its capabilities and failures.
• Principle 2: Be the human in the loop. GenAI works best with human help; always double-check its work.
• Principle 3: Treat GenAI like a person (but tell it what kind of person it is). Give it a specific persona, context, and constraints for better results. For example, you’ll get better results from the detailed prompt “Act as a witty comedian and generate some slogans for my product that will make people laugh” instead of the more generic prompt “Generate some slogans for my product.”
• Principle 4: Assume that this is the worst GenAI tool you will ever use. Generative AI tools are advancing and evolving rapidly.
Second, I want to share with you an online course from Anthropic, the makers of the GenAI tool Claude. This course, which I worked through this summer, is called AI Fluency: Framework & Foundations, and you do not need to use Claude to work through the exercises—you can use any GenAI tool you wish. The focus of this 14-lecture course is to learn how to collaborate with GenAI systems effectively, efficiently, ethically, and safely.

One of the concepts taught in the AI Fluency course is what Anthropic calls the four D’s: the four key competencies of AI fluency (they seem to be big on alliteration!).
• Delegation: deciding what work should be done by humans, what work should be done by AI, and how to distribute tasks between them.
• Description: effectively communicating with AI tools, including clearly defining outputs, guiding AI processes, and specifying desired AI behaviours and interactions.
• Discernment: thoughtfully and critically evaluating AI outputs, processes, behaviours, and interactions (assessing quality, accuracy, appropriateness, and areas for improvement).
• Diligence: using AI responsibly and ethically (maintaining transparency and taking accountability for AI-assisted work; an example of this is when I described in detail which GenAI tools I used, and how I used them, in creating the PowerPoint slide presentation, earlier in this post).
Finally, I share with you what I found to be a very helpful guide prepared by a librarian, Nicole Hennig, about how to stay on top of the rapidly evolving and accelerating field of GenAI. You can obtain a copy of her 2025 guide here. This is as good a place as any to start working your way up the learning curve (as I first did, with the 2024 edition of her guide). Nicole offers a bounty of valuable tips, tricks, suggestions of people to follow, and advice on how best to keep up with the roiling sea of change which is currently taking place in GenAI!

Finally, I wanted to talk a bit about the divisive nature of GenAI. AI/GenAI seems to be a very polarizing topic, especially in the field of higher education! While I did try to present a balanced viewpoint on generative AI tools, talking about both the good and the bad, I did receive some feedback from a few people who felt that my presentation was too…positive? And that, despite the warnings in my talk about some very serious problems with GenAI tools, I had neglected to portray GenAI’s more negative aspects in a more forceful way.
For example, one agriculture professor, in an email after my talk, said this about the Anthropic online course in AI Fluency, a learning resource which I had mentioned in the previous section of this blogpost, as well as in my slide presentation:
…I know you were recommending the AI class that was created by Anthropic, and how it is agnostic to the AI used, and just a good introduction to use. I’ll admit that I have not taken the course (I am now intrigued and will try to), but I couldn’t help thinking when you introduced it, of courses on appropriate opioid prescribing practices made by Purdue pharma.
Ouch. Fair point, but painful comparison (and I say that as someone who is now actually suffering from physical pain, as I stated up top). So I wanted to end this blogpost with a brief discussion about how some intelligent but more skeptical observers are responding to the tidal wave of GenAI tools washing over society as a whole, and share links to some criticism, as part of providing a larger perspective. I will be the first to admit that I am not an expert in this field, despite what I have learned since this summer! I am a librarian with a computer science degree, which made it easier for me to comprehend some of the more technical aspects of what I was reading, but which helped less with the philosophical side of the discussion about GenAI.
The professor who commented on the Anthropic course above shared with me a couple of links to recent critical articles which I, in turn, will share with you. The first link is an Open Letter by 17 scholars, warning about blindly accepting GenAI tools in higher education (post-secondary education, i.e. colleges and universities, although obviously many of the same arguments could also be made about K-12 schooling):
Guest, O., Suarez, M., Müller, B., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the Uncritical Adoption of ‘AI’ Technologies in Academia. Zenodo. Retrieved Dec. 19th, 2025 from https://doi.org/10.5281/zenodo.17065099
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
The second link is the text of a recent talk by the well-known intellectual author and gadfly Cory Doctorow, who gave his university audience a foretaste of his book on AI, which will be published in 2026:
Doctorow, C. (2025). Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI. Retrieved Dec. 19th, 2025 from https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington
Over the summer I wrote a book about what I think about AI, which is really about what I think about AI criticism, and more specifically, how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.” I titled the book The Reverse Centaur’s Guide to Life After AI, and Farrar, Straus and Giroux will publish it in June, 2026.
But you don’t have to wait until then because I am going to break down the entire book’s thesis for you tonight, over the next 40 minutes. I am going to talk fast.
And both Cory Doctorow and Olivia Guest et al. make some seriously valid points about the negative consequences of a heedless, thoughtless, headlong rush into adopting GenAI tools. Now, you can decide, after reading all this, that you will have absolutely nothing to do with AI and GenAI, and that’s a valid position to take. But will it change the fact that GenAI is already being incorporated into software we use every day? Can the genie be pushed back into the bottle? Doubtful.
So what I am saying is: learn how the enemy (if you see it as “the enemy”) works. Spend a bit of time becoming familiar with the GenAI tools, try them out on certain tasks, and see for yourself where and how they succeed at a particular task, and (more importantly) where and how they fail. I have had some amazing results from using GenAI tools over the past eight months, but I have also experienced situations where I walked away thinking, “this is garbage.” But may I gently suggest that the only way to gain the experience which informs your opinions is to actually use the tools, rather than sticking your head in the sand and refusing to have anything to do with them.
Are we the unwitting and unwilling beta-testers for these products, as they are rolled out and embedded stealthily in products we already know and use? Absolutely. Will there be negative consequences, some foreseen, and others unexpected and unanticipated? Absolutely. Will there be some tasks which GenAI does and does well? Also, yes, absolutely (and it is already happening based on my own experience). All three things can be true at the same time. Like all technology throughout human history, artificial intelligence is a double-edged sword. It can harm as well as heal.
I still think that the best stance on GenAI is to be a skeptical but informed user of the tools (even if you limit yourself to the lesser-powered, free versions). Also, you owe it to yourself to read a variety of viewpoints on the technology, from a range of sources (start with my fellow librarian Nicole Hennig’s excellent guide which I mentioned above, plus my skeptical professor’s two links, and work out from there).
Above all, even with how divisive AI can be as a topic, now is not the time to be locked into either a rigid AI-is-bad or AI-is-good perspective, because both are true at times, and we need to hold space for that unsettling and upsetting fact. And we need to brace ourselves, both personally and as a society, because (as I have stated before on this blog) things are about to get deeply, deeply weird before all this is over.



