
AI Tools and Models: What I Learned from the CompTIA AI Essentials v2 Course

By Eric Andrew Kristof, RN | March 2026


I recently completed the CompTIA AI Essentials v2 course.

Among the course material was a section covering the current landscape of AI tools and models — what they are, what they’re built for, and how they differ from one another. I’ve expanded on that material below, adding context from my own time working with several of them directly.

Before diving in, there’s a distinction worth keeping in mind: the tool you interact with and the model powering it underneath are two different things. When you open ChatGPT, you’re using OpenAI’s interface — but GPT-5 is the model doing the actual work behind the scenes. Companies are improving both layers simultaneously, and in many cases a single tool will quietly route your request to whichever model is best suited for the job. You don’t always see it happen. It just does.

That might sound like a minor technical detail. But understanding it changes how you think about choosing between tools — and what to expect from them.
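To make the routing idea concrete, here is a toy sketch of what a tool-layer "router" might look like. The model names, the length threshold, and the keyword cues are all made up for illustration — this is not how any vendor actually implements it.

```python
# Hypothetical sketch of the tool-vs-model routing described above: the tool
# layer inspects the prompt and quietly picks a model. All names and rules
# here are illustrative assumptions, not a real vendor API.

def route_prompt(prompt: str) -> str:
    """Pick a model tier based on rough complexity cues in the prompt."""
    reasoning_cues = ("prove", "step by step", "analyze", "compare")
    if len(prompt) > 2000 or any(cue in prompt.lower() for cue in reasoning_cues):
        return "deep-reasoning-model"   # slower, more deliberate answers
    return "fast-general-model"         # cheap default for everyday asks

print(route_prompt("What's the capital of France?"))           # fast-general-model
print(route_prompt("Analyze these lab results step by step"))  # deep-reasoning-model
```

The point of the sketch is the division of labor: you talk to one interface, and a decision like this happens before any model ever sees your prompt.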


General Purpose AI Tools

These are the versatile everyday assistants. Writing, summarizing, coding, researching, brainstorming — they handle the full range. Think of them as the Swiss Army knives of the AI world.

ChatGPT (OpenAI — Model: GPT-5, August 2025)

ChatGPT is the tool most people picture when they hear “AI assistant,” and for good reason. It runs on GPT-5, a unified model capable of handling text, code, images, and real-time voice in the same conversation. OpenAI recently removed the manual model-switching that used to require users to choose between a general model and a deep reasoning model — the system now analyzes your prompt and picks the right one automatically. For most everyday tasks, it’s still the first tool people reach for.

Google Gemini (Google — Model: Gemini 3 Pro, October 2025)

Gemini’s biggest differentiator is how tightly it integrates with Google Workspace. If your day runs through Docs, Drive, or Gmail, Gemini can work directly with your actual files — not just generic knowledge pulled from the internet. That makes it genuinely useful for context-aware drafting and analysis. The standalone chat experience covers writing, coding, and research at a competitive level, but the Workspace integration is where it earns its keep.

Microsoft Copilot (Microsoft — Model: GPT-5 Powered, October 2025)

Microsoft’s approach was to embed AI inside the tools people already use rather than create another destination to visit. Copilot lives inside Windows, Word, Excel, Teams, and Outlook. Ask it to summarize a Teams meeting, make sense of an Excel dataset, or draft a Word document — it works without you leaving the application. For anyone whose workday runs on Microsoft 365, this is a meaningful productivity upgrade that doesn’t require changing how you work.

NotebookLM (Google — Model: Gemini Powered, November 2025)

NotebookLM takes a fundamentally different approach to AI. It only answers questions based on documents you upload — no wandering outside the source material, no fabricated facts. You give it your PDFs, research papers, or notes, and it synthesizes answers strictly from that content. The feature that surprised me most is “Audio Overviews” — it generates a two-host podcast-style audio summary of your uploaded materials. For reviewing dense reading material on a commute, that’s genuinely useful.
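The source-grounded idea is worth a concrete illustration. The sketch below is a deliberately crude keyword matcher — not how NotebookLM actually works — but it captures the behavior that matters: answers come only from your documents, and when nothing matches, the tool declines rather than invents.

```python
# Toy sketch of source-grounded answering: only text from the "uploaded"
# documents can appear in an answer. This is an illustrative keyword matcher,
# not Google's retrieval method.

def grounded_answer(question: str, sources: list[str]) -> str:
    terms = set(question.lower().split())
    # Keep only source passages sharing at least one word with the question.
    hits = [s for s in sources if terms & set(s.lower().split())]
    if not hits:
        return "Not found in your sources."  # refuse instead of fabricating
    return " ".join(hits)

notes = ["Metformin lowers hepatic glucose production.",
         "Insulin therapy requires careful titration."]
print(grounded_answer("How does metformin work?", notes))
```

Real systems use far more sophisticated retrieval, but the contract is the same: the source set is a hard boundary, which is exactly what makes the tool trustworthy for dense reading material.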

Apple Intelligence (Apple — Model: Native iOS/macOS, On-Device)

Apple Intelligence is less of a chatbot and more of a layer built directly into iOS and macOS. Its defining characteristic is on-device processing — the AI runs locally on your hardware rather than sending data to a remote server, which has real privacy implications worth paying attention to. It can read your screen context and take actions across apps. You don’t go to it. It’s just there.

Claude (Anthropic — Model: Claude 4.5 Family, Late 2025)

Full disclosure: Claude is what I use most, so I have some bias here. Its standout technical feature is a very large context window — meaning you can feed it lengthy documents, large codebases, or extended conversations without it losing the thread. It handles writing, analysis, and coding well. Anthropic has put particular emphasis on thoughtful, careful reasoning, which tends to show up when you push it on complex or nuanced questions.

Perplexity (Perplexity AI — Model: v2025, April 2025)

Perplexity calls itself an “answer engine” rather than a chatbot, and the distinction matters. Every response comes with inline citations linking back to the sources it drew from. If you’re doing research and need to verify claims rather than just receive them, this is the right tool. It’s a better-structured alternative to a traditional web search — especially when you want a synthesized, sourced answer instead of a list of links to work through yourself.
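The citation pattern itself is simple enough to sketch. The structure below — claims paired with numbered source markers — is my own illustration of the general idea, not Perplexity's implementation, and the URL is a placeholder.

```python
# Minimal sketch of inline-citation output: every claim carries a numbered
# marker pointing back to its source. Structure and URL are illustrative only.

def cite_answer(claims: list[tuple[str, str]]) -> str:
    """claims: (sentence, source) pairs -> answer text plus a numbered source list."""
    body = " ".join(f"{text} [{i}]" for i, (text, _) in enumerate(claims, start=1))
    refs = "\n".join(f"[{i}] {url}" for i, (_, url) in enumerate(claims, start=1))
    return f"{body}\n\n{refs}"

print(cite_answer([
    ("Hospitals widely use certified EHR systems.", "example.org/source-1"),
]))
```

The value is in the verification loop: because each sentence is tied to a specific source, you can check the claim instead of taking it on faith — which is the whole argument for an "answer engine" over a chatbot.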

Grok (xAI — Model: Grok 4, July 2025)

Grok’s defining advantage is real-time access to posts on X (formerly Twitter). Most AI models work from training data with a cutoff date — Grok can pull live social data to gauge current events and public sentiment as they happen. For anyone tracking breaking news, monitoring a topic, or analyzing how something is being discussed right now, that’s a capability the other general-purpose tools don’t have.


Specialized Tools: Creative, Code & Web

Beyond the general assistants, the course highlighted tools built for specific workflows — image and video creation, software development, and automated web tasks. These aren’t trying to do everything. They’re built to do one thing very well.

Google AI Studio (Google — Models: Nano Banana Pro, Veo 3.1, October 2025)

Google AI Studio is the platform where Google’s most capable media generation models live. Veo 3.1 handles video, and Nano Banana Pro handles images, supporting photo-realistic generation with the ability to maintain consistent characters across a series of images — a problem that has historically been difficult to solve in AI image tools. It’s geared toward professional media creators who need precise, repeatable results.

Midjourney (Midjourney — Model: v7, June 2025)

Midjourney remains the dominant name in AI-generated concept art and stylized illustration. Version 7 added “Omni Reference,” which maintains visual consistency for characters across multiple images — a long-standing frustration in AI art workflows. A “Draft Mode” speeds up iteration considerably. If the goal is high-quality artistic output rather than photorealism, this is still the tool most creative professionals reach for first.

Stable Diffusion (Stability AI — Model: SD3, June 2025)

Stable Diffusion stands apart from the rest of this list because it’s an open-weight model — meaning you can download it and run it entirely on your own hardware. No data leaves your machine. That privacy-first architecture makes it the default choice for enterprises with data sovereignty requirements. SD3 also handles text rendering in images notably better than most competitors, which has historically been one of the harder problems in AI image generation.

Sora (OpenAI — Model: Sora 2, September 2025)

Sora is OpenAI’s video generation tool. Sora 2 raised the quality bar considerably — the physics simulation holds together well enough that generated scenes feel coherent in a way earlier AI video didn’t. A standout capability is compositing real people or objects into AI-generated scenes with synchronized audio. The implications for storytelling, marketing, and production work are significant.

Hailuo (MiniMax — Model: Hailuo-02, June 2025)

Hailuo generates native 1080p video with precise camera control — you can specify framing and movement in a way that actually translates to the output. The physics simulation handles motion-heavy content well. It’s positioned for social media creators and anyone producing short-form video that needs to look polished without a full production budget behind it.

GitHub Copilot (Microsoft / GitHub — Model: Copilot 2025 with GPT-5)

GitHub Copilot has become the standard AI coding companion in professional development. The 2025 version integrates GPT-5 and adds voice-to-code, visual code understanding, and real-time pair programming directly inside your editor. It suggests completions, catches bugs, and explains unfamiliar code as you work. For anyone writing software regularly, it’s a genuine productivity multiplier.

Atlas (OpenAI — Model: ChatGPT Browser, October 2025)

Atlas embeds ChatGPT directly into your browser as an agent that can act on the web on your behalf — not just answer questions in a chat window. It remembers the context of pages you visit and builds “browser memories” to recall your preferences over time. Useful for deep research sprints, event planning, and booking tasks that benefit from a persistent assistant rather than starting from scratch every session.

Meteor (Meteor — Model: Meteor Browser, August 2025)

Meteor builds the AI natively into the browser itself rather than layering it on as an extension. The design philosophy is proactive automation: it recognizes when you’re doing something repetitive — filling out a form, scheduling a meeting, completing an application — and steps in to handle it. It’s aimed at the kind of low-value personal admin that quietly consumes more of the day than it should.


What Can You Actually Do With These?

The course frames the practical application well: you don’t need a technical background, and you don’t need to know how to code. If you can type a question into a search box, you can use these tools productively today.

At work, that looks like drafting and refining emails faster, summarizing long reports or documents, generating slides or reports from a prompt, writing and debugging code, and analyzing data. At school, it looks like synthesizing dense reading into usable notes, practicing conversational language skills, or turning lecture material into audio reviews with NotebookLM. And beyond work and school — writing, art, video, games — the tools are genuinely open-ended in ways that are still being figured out.

The harder skill — and the one the AI Essentials course is really building toward — is knowing what to ask, and having enough context about how these tools work to evaluate what comes back critically.

Getting started is easy. Getting good at it is the actual work.


Eric Andrew Kristof is a Registered Nurse and Healthcare IT professional based in Hot Springs Village, Arkansas. He holds CompTIA A+ and CompTIA AI Essentials certifications, is a Microsoft Certified Professional, and is a member of HIMSS. He is currently seeking Healthcare IT roles in clinical informatics, EHR implementation, and end user services.

Reach him at eric.andrew.kristof@gmail.com or connect on LinkedIn.


