<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="http://jeffkeltner.com/feed.xml" rel="self" type="application/atom+xml" /><link href="http://jeffkeltner.com/" rel="alternate" type="text/html" /><updated>2026-04-08T21:43:10+00:00</updated><id>http://jeffkeltner.com/feed.xml</id><title type="html">Jeff Keltner</title><subtitle>Maker-of-trouble, stirrer-of-pots. I write about whatever comes to mind, but mostly about AI, technology, and policy.
</subtitle><author><name>Jeff Keltner</name></author><entry><title type="html">Stop Telling AI What You Want — Show It</title><link href="http://jeffkeltner.com/2026/04/08/stop-telling-ai-show-it.html" rel="alternate" type="text/html" title="Stop Telling AI What You Want — Show It" /><published>2026-04-08T00:00:00+00:00</published><updated>2026-04-08T00:00:00+00:00</updated><id>http://jeffkeltner.com/2026/04/08/stop-telling-ai-show-it</id><content type="html" xml:base="http://jeffkeltner.com/2026/04/08/stop-telling-ai-show-it.html"><![CDATA[<p>Most advice about working with AI boils down to some version of “be specific about what you want.” Write a better prompt. Describe the output in detail. Give clear instructions. That’s fine — but I’ve found there’s a much more powerful move that most people skip entirely.</p>

<p>Show it what good looks like.</p>

<!--more-->

<h2 id="the-problem-with-telling">The problem with telling</h2>

<p>Here’s the thing about describing what you want: you don’t always know. Or more precisely — you know it when you see it, but you can’t fully articulate it. This is especially true for anything stylistic or nuanced.</p>

<p>I ran into this head-on when I started building a new podcast with my friend Cyrus. We wanted the AI-generated scripts to sound like <em>us</em> — not generic podcast-host-voice, but the way Cyrus and I actually talk to each other. The way we interrupt, riff, push back, land jokes.</p>

<p>So I tried telling it. I wrote descriptions of how each of us speaks. I explained our dynamic — who tends to set up points, who tends to land them, how we use humor. I spent a lot of time on this. And the scripts came back… fine. Competent. But they didn’t sound like us.</p>

<p>Then I had a different idea. Instead of describing our voices, I just gave it transcripts of our actual conversations. Real ones, unedited. And the difference was immediate. The AI picked up on patterns I hadn’t even thought to mention — little verbal tics, the way we build on each other’s points, the rhythm of how we go back and forth. Things I couldn’t have described because I wasn’t consciously aware of them.</p>

<p>That’s the core insight: <strong>you can’t tell an AI about things you don’t know you know.</strong> But you can show it examples that contain those things, and let it figure them out.</p>

<h2 id="letting-ai-learn-what-you-didnt-think-to-teach">Letting AI learn what you didn’t think to teach</h2>

<p>I saw this play out even more clearly with my other podcast, What the AI?!, where I co-host with Annie. We have a pretty dialed-in workflow — AI helps generate the script, we record, and then I feed the transcript back in so the system can learn from it.</p>

<p>One lesson it picked up on its own was particularly sharp. We’d recorded an episode where we ran long — too many stories, not enough time — and ended up skipping the last story entirely. The AI noticed this. On its own, it added a check to its script-writing process: make sure the final story in the rundown is skippable. Keep the most important stories earlier in the show so that if we have to cut, we’re not losing something critical.</p>

<p>I never would have thought to write that as an instruction. It’s the kind of operational wisdom that only emerges from watching real work happen — from seeing where the plan met reality and broke down. But because I showed the AI the gap between the script and what we actually recorded, it found the lesson itself.</p>
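
<p>If you want to wire this loop up yourself, here is a minimal sketch of the idea, assuming the OpenAI Python SDK. The file names, model choice, and the running “lessons” file are all illustrative, not my exact setup:</p>

<pre><code class="language-python"># A rough sketch of the script-vs-transcript feedback loop.
# Assumes the OpenAI Python SDK; paths, model, and the lessons-file
# convention are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

script = Path("episode_script.md").read_text()           # what we planned
transcript = Path("episode_transcript.txt").read_text()  # what we actually recorded

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You help write podcast scripts. Compare the planned script "
                "to the recorded transcript and list durable lessons for "
                "future scripts: pacing, rundown order, what got cut and why."
            ),
        },
        {
            "role": "user",
            "content": f"PLANNED SCRIPT:\n{script}\n\nACTUAL TRANSCRIPT:\n{transcript}",
        },
    ],
)

# Append the extracted lessons to a running file that gets included
# in the next script-writing prompt.
with open("lessons.md", "a") as f:
    f.write("\n" + response.choices[0].message.content)
</code></pre>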

<h2 id="why-this-works">Why this works</h2>

<p>There’s a useful analogy here to how people learn. If you’re training a new hire, you can hand them a style guide and a list of dos and don’ts. That helps. But they’ll learn far more from sitting in on a few meetings, reading a few real examples of great work, and seeing how the team actually operates.</p>

<p>AI is similar. Instructions set a baseline, but examples create understanding. And the richest examples are messy, real-world ones — not polished samples you curated to illustrate a point, but the actual artifacts of your work. Transcripts, drafts, email threads, before-and-after edits. The stuff that captures all the things you know implicitly but would never think to write down.</p>

<h2 id="the-practical-takeaway">The practical takeaway</h2>

<p>Next time you’re struggling to get AI to produce something that feels right, resist the urge to write a longer, more detailed prompt. Instead, ask yourself: <strong>do I have examples of what good looks like?</strong></p>

<p>Feed it past work you’re proud of. Show it the real conversations, not your description of them. Give it the before and after so it can see what changed. Let it find the patterns — including the ones you didn’t know were there.</p>
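
<p>If you work with these models programmatically, the shift is small but real: you swap adjectives for artifacts. A minimal sketch of example-based prompting, again assuming the OpenAI Python SDK, with placeholder file names:</p>

<pre><code class="language-python"># Show, don't tell: put real examples in the prompt instead of
# descriptions of the style you want. Paths and model are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Real, unedited artifacts of past work, not curated samples.
examples = "\n\n---\n\n".join(
    Path(p).read_text() for p in ["transcript_01.txt", "transcript_02.txt"]
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Write a podcast script for the two hosts below. Match the "
                "voice, rhythm, and dynamic you observe in these real "
                "transcripts:\n\n" + examples
            ),
        },
        {"role": "user", "content": "Topic: this week's AI news."},
    ],
)

print(response.choices[0].message.content)
</code></pre>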

<p>You’ll be surprised how much it picks up that you never thought to mention.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[Most advice about working with AI boils down to some version of “be specific about what you want.” Write a better prompt. Describe the output in detail. Give clear instructions. That’s fine — but I’ve found there’s a much more powerful move that most people skip entirely. Show it what good looks like.]]></summary></entry><entry><title type="html">Why I’m an AI Optimist</title><link href="http://jeffkeltner.com/2026/04/02/why-im-an-ai-optimist.html" rel="alternate" type="text/html" title="Why I’m an AI Optimist" /><published>2026-04-02T00:00:00+00:00</published><updated>2026-04-02T00:00:00+00:00</updated><id>http://jeffkeltner.com/2026/04/02/why-im-an-ai-optimist</id><content type="html" xml:base="http://jeffkeltner.com/2026/04/02/why-im-an-ai-optimist.html"><![CDATA[<p>I spend most of my time these days reading, thinking, writing, and podcasting about AI. And I’m optimistic about where it’s headed — substantively optimistic. Not in the “everything will be fine, don’t worry” sense, but in the “history gives us strong reasons to believe this will be net positive” sense.</p>

<!--more-->

<p>That puts me in something of a minority. Polls show Americans are increasingly pessimistic about AI. The loudest voices in the conversation tend to be warning about job losses, existential threats, or societal collapse. And I get the appeal of those arguments — they’re vivid and specific in a way that optimism often isn’t. It’s easy to picture the jobs that disappear. It’s much harder to picture the ones that don’t exist yet.</p>

<p>But I think the pessimists are mostly wrong — and the optimists have a stronger case than they usually make. Here’s mine.</p>

<h2 id="why-the-doomers-are-overwrought">Why the Doomers Are Overwrought</h2>

<p>I’d break the AI pessimists into three broad camps: economic doomers, existential doomers, and societal doomers. Each has a version of the argument that sounds compelling. But I think they all share a common mistake — overestimating how “this time is different” and underestimating what history actually teaches us about how these transitions play out.</p>

<h3 id="economic-doomers">Economic Doomers</h3>

<p>The economic doomers believe AI will cause mass unemployment — that it’ll replace huge swaths of white-collar work quickly, concentrating wealth among the few who own the technology. I understand the fear. But I think it dramatically overestimates the speed at which these changes actually work through the economy.</p>

<p>Anyone who studies forecasting knows a useful principle: we tend to underestimate the base case and overestimate how “this time is different.” I think that’s exactly what’s happening in the AI conversation. The base case — what history actually teaches us about technology adoption — is that these transitions are slow. Not because the technology isn’t powerful, but because organizations are slow. Change management is hard. Retraining takes time. Entire industries don’t restructure overnight.</p>

<p>Think about accountants. Spreadsheets didn’t eliminate accounting — they transformed it. There are more accountants today than before spreadsheets existed, and they do more sophisticated work. More people have access to accounting, too. The profession expanded because the tool made it more accessible and more valuable, not less.</p>

<p>Or think about this: twenty-five years ago, “mobile app developer” wasn’t a job. It wasn’t even a concept. Today it’s a massive job category supporting entire businesses and industries. It was essentially impossible to picture that role before the iPhone existed. And that’s the pattern — it’s easy to see the jobs that get disrupted. It’s hard to imagine the new ones that emerge. That asymmetry is a big part of why the negative argument feels more intuitive than the positive one. But the positive one is what history actually delivers.</p>

<p>Will AI change work? Absolutely — and massively. But not as rapidly as the doomers suggest. And will the endpoint be net positive? I believe it will — because that’s what every major technological revolution in history has produced, once we’ve had time to adapt.</p>

<h3 id="existential-doomers">Existential Doomers</h3>

<p>The existential doomers worry about something much bigger — that AI will become a superintelligence, a digital god that either destroys or subjugates humanity. The Skynet scenario. I take this concern seriously, and I can’t prove it’s impossible. Nobody can prove a negative.</p>

<p>But I don’t see any evidence that we’re close to that kind of capability. What we have today — and what we’re likely to have for the foreseeable future — are very powerful tools. Not sentient beings. Not digital gods. Very, very good tools that can process information and identify patterns at scales humans can’t match. That’s world-changing, but it’s a fundamentally different thing than the sci-fi scenario.</p>

<p>Arvind Narayanan and Sayash Kapoor make this case well in their essay <a href="https://knightcolumbia.org/content/ai-as-normal-technology">“AI as Normal Technology”</a> — even transformative, general-purpose technologies like electricity and the internet are still “normal” technologies. AI may be the most important technology of our lifetimes, but it’s still a technology. Not a new species. I think treating it that way leads to much better decisions than treating it as an existential threat.</p>

<h3 id="societal-doomers">Societal Doomers</h3>

<p>The societal doomers worry that AI will rip apart the fabric of society — through misinformation, manipulation, deepfakes, erosion of trust. And I’ll be honest: I think this camp has the most legitimate concerns of the three. These are real risks.</p>

<p>But here’s what I keep coming back to: we’re already experiencing most of these problems. The internet, social media, algorithmic feeds, smartphones — these technologies have already strained our information ecosystem, polarized our politics, and created real challenges for mental health and social trust. These aren’t hypothetical concerns. They’re the world we live in right now.</p>

<p>I don’t think AI represents a step-function change in those problems. It’s a continuation of trends that started well before large language models existed. And if anything, AI might actually give us better tools to address some of these challenges — from detecting misinformation to personalizing education to making complex systems more navigable.</p>

<p>I’m not dismissing the risks. I’m saying they’re not new, and I’m more optimistic about our ability to manage them than most.</p>

<h2 id="the-real-upside--and-why-its-bigger-than-chatbots">The Real Upside — and Why It’s Bigger Than Chatbots</h2>

<p>Here’s where I think the conversation gets most interesting — and where the doomers most badly miss the mark. When most people think about AI, they picture chatbots, image generators, maybe a coding assistant. I’m excited about those things and the impact they’re already having. But they’re just a small part of the story.</p>

<p>The thing that excites me most is AI’s ability to accelerate discovery — particularly in fields where the bottleneck isn’t creativity or insight but the sheer scale of possibilities to explore.</p>

<p>Take medicine and biology. AlphaFold solved the protein folding problem — a challenge that had stumped researchers for decades — by doing something humans simply can’t: systematically exploring an enormous possibility space and identifying the structures that work. That’s not “artificial genius.” It’s a fundamentally different kind of tool — one that can sift through millions of potential drug compounds, protein structures, or genetic combinations and surface the most promising candidates for human researchers to investigate.</p>

<p>This pattern applies well beyond biology. In materials science, we’re using AI to evaluate thousands of potential battery chemistries to find the ones worth investing in. In energy, AI is helping optimize everything from grid management to the search for better solar cell materials. In climate science, it’s accelerating the modeling of complex systems that are too large and interconnected for humans to analyze alone.</p>

<p>Dario Amodei, the CEO of Anthropic, wrote a <a href="https://darioamodei.com/essay/machines-of-loving-grace">long essay</a> laying out many of these potential upsides in detail. His framing is different from mine — I don’t love the anthropomorphizing of AI as “geniuses in a data center” — but the underlying point resonates. We’re building tools that can explore possibility spaces at a scale that was previously unimaginable. The question isn’t whether that’s powerful. It’s whether we’ll let ourselves use it.</p>

<p>And that’s the pattern that gives me the most confidence. This isn’t speculation about some far-off future. AlphaFold exists today. AI-assisted drug discovery is happening now. The tools that will help us address climate change, develop better energy storage, discover new materials — those are in progress, not in the realm of science fiction.</p>

<p>When I look at the history of technology, this is what the big revolutions actually do. They don’t just automate existing work — they open up entirely new possibilities that we couldn’t have imagined before. The printing press didn’t just make scribes faster. The internet didn’t just make mail faster. And AI won’t just make knowledge workers faster. It will let us attempt things we couldn’t have attempted at all.</p>

<h2 id="the-one-thing-i-worry-about">The One Thing I Worry About</h2>

<p>So if I’m this optimistic, what keeps me up at night? Regulation.</p>

<p>Not the existence of regulation — some regulation is necessary and important. What worries me is that we’ll over-regulate AI out of fear, locking it away before we get the chance to realize its benefits. That we’ll see the risks — which are real — and respond by putting this technology in a box.</p>

<p>I keep thinking about nuclear power. Here was a technology with the potential to fundamentally transform our energy infrastructure and help address climate change. And we effectively regulated it into irrelevance. Not because the technology didn’t work, but because the fear of what it could do outweighed the appreciation of what it could deliver. Decades later, we’re desperately trying to reverse course as we realize how much that decision cost us.</p>

<p>I worry we’re on that same path with AI. The doomer narratives are loud. Regulation is the natural response to fear. And if we’re not careful, we’ll end up in a world where we’ve mitigated the downsides but also forfeited the upsides — the medical breakthroughs, the scientific acceleration, the expansion of what’s possible.</p>

<p>For what it’s worth, I’m pretty optimistic about the private market figuring out education and corporate adoption. Companies will adapt because they have to — the competitive pressure is too strong. But regulation is the one area where well-intentioned decisions, driven by fear, could hold us back.</p>

<h2 id="weve-done-this-before">We’ve Done This Before</h2>

<p>I’ll close with the observation that gives me the most comfort. We’ve been here before — and we’ve gotten through it.</p>

<p>There was a time when the vast majority of humans worked in agriculture. That’s not true anymore. We didn’t end up with mass permanent unemployment. We found new kinds of work, built new industries, adapted our institutions. It wasn’t always smooth or fast or painless. But we adapted.</p>

<p>I don’t pretend to know exactly what the AI-enabled future looks like. Nobody does. But I think the historical base rate is overwhelmingly on the side of optimism — not naive optimism, but the earned kind. The kind that says: this will be hard, there will be real challenges, and we’ll figure it out. We always have.</p>

<p>The biggest risk isn’t that AI changes too much. It’s that we don’t let it change enough.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[I spend most of my time these days reading, thinking, writing, and podcasting about AI. And I’m optimistic about where it’s headed — substantively optimistic. Not in the “everything will be fine, don’t worry” sense, but in the “history gives us strong reasons to believe this will be net positive” sense.]]></summary></entry><entry><title type="html">Models Aren’t Defensible</title><link href="http://jeffkeltner.com/2025/12/02/models-arent-defensible.html" rel="alternate" type="text/html" title="Models Aren’t Defensible" /><published>2025-12-02T00:00:00+00:00</published><updated>2025-12-02T00:00:00+00:00</updated><id>http://jeffkeltner.com/2025/12/02/models-arent-defensible</id><content type="html" xml:base="http://jeffkeltner.com/2025/12/02/models-arent-defensible.html"><![CDATA[<p>There’s a lot of AI news this week — but much of it kept bringing me back to the reality that ultimately models aren’t going to be defensible. That’s not to say they aren’t incredibly valuable and hard to design. But while models may ultimately create a lot of value, I think it will be hard to rely on creating a model to capture that value.</p>

<!--more-->

<p>Why? Mostly because models have turned out to be somewhat undifferentiated. Every time a new model comes out with remarkable new capabilities, another matches it within a few weeks — often a more efficient one. Moreover, most models are now more than capable enough for the vast majority of everyday use cases. Users may prefer one model over another — but likely not enough to pay a premium for it.</p>

<p>So, who will be able to capture the value? I see a few potential winners.</p>

<ol>
  <li><strong>Workflow Systems.</strong> Most LLMs are going to end up being utilized in the context of a workflow. The systems that own those workflows (think EHRs in medicine or CRMs in sales) will be able to capture a lot of value by orchestrating the right models, the right prompts, and their data. This could be existing players or new entrants.</li>
  <li><strong>Data Systems.</strong> For a personal agent to be useful, it needs access to your personal data. Companies with access to that data will be able to capture more of the value of enabling AI on top of it than those with just a model. Think Microsoft and Google.</li>
  <li><strong>User-Facing Winner(s).</strong> There is likely to be one big winner in consumer-facing AI, just as Google became synonymous with search. Once that user habit is ingrained, you don’t need to be the best to maintain it. Right now, this looks like ChatGPT — though never count out Google (especially given their existing search distribution). To win this war it’s likely you will need to build your own foundational model — but having a great model won’t be enough.</li>
</ol>

<p>So, where does that leave the players in the space?</p>

<p>OpenAI is winning on 3 right now. It’s trying to tackle 2 through integrations. This framework would suggest they should lean into that hard and move fast.</p>

<p>Anthropic isn’t winning on any of these right now. This framework indicates they are not in a great position.</p>

<p>Microsoft has real advantages in 2 and to some extent 1. That may be enough to win large deals in the enterprise space. I don’t see much of a path for them on the consumer side (though they will try to leverage Windows for 2).</p>

<p>Grok isn’t winning on any of these at the moment. Neither is Meta.</p>

<p>There is a case to make that Google is doing well on all 3. GCP is a solid contender in enterprise data and workflows. Gmail / Docs has a lot of consumer data. And more people likely interact with Gemini through Google search than use ChatGPT. It does seem like Google has the most paths to success at this point — including their own model.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[There’s a lot of AI news this week — but much of it kept bringing me back to the reality that ultimately models aren’t going to be defensible. That’s not to say they aren’t incredibly valuable and hard to design. But while models may ultimately create a lot of value, I think it will be hard to rely on creating a model to capture that value.]]></summary></entry><entry><title type="html">Regulators Make Bad Product Designers</title><link href="http://jeffkeltner.com/2025/12/02/regulators-make-bad-product-designers.html" rel="alternate" type="text/html" title="Regulators Make Bad Product Designers" /><published>2025-12-02T00:00:00+00:00</published><updated>2025-12-02T00:00:00+00:00</updated><id>http://jeffkeltner.com/2025/12/02/regulators-make-bad-product-designers</id><content type="html" xml:base="http://jeffkeltner.com/2025/12/02/regulators-make-bad-product-designers.html"><![CDATA[<p>At <a href="https://www.theverge.com/news/823788/europe-cookie-prompt-browser-changes-proposal">long last</a> EU regulators are going to do something about the horrific slate of cookie banners that have descended on the web due to poor EU regulations (that have done nothing to protect privacy, best I can tell).</p>

<!--more-->

<p>If you hate cookie banners (and who doesn’t?), this seems like a clear win. But while the proposed solution (allowing users to specify their preference one time in their browser instead of on every site) will certainly improve everyone’s online browsing experience, it still doubles down on the horrible idea of asking regulators to play product manager.</p>

<p>It turns out that product management is hard. Many companies are quite bad at it. But regulators are usually worse. We would all be better off if regulators gave broad guidance and then let product companies compete and innovate to deliver the best experiences.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[At long last EU regulators are going to do something about the horrific slate of cookie banners that have descended on the web due to poor EU regulations (that have done nothing to protect privacy, best I can tell).]]></summary></entry><entry><title type="html">AI: Car, Bus, or Road?</title><link href="http://jeffkeltner.com/2025/07/09/ai-car-bus-or-road.html" rel="alternate" type="text/html" title="AI: Car, Bus, or Road?" /><published>2025-07-09T00:00:00+00:00</published><updated>2025-07-09T00:00:00+00:00</updated><id>http://jeffkeltner.com/2025/07/09/ai-car-bus-or-road</id><content type="html" xml:base="http://jeffkeltner.com/2025/07/09/ai-car-bus-or-road.html"><![CDATA[<p>The conversation around public AI is messy and often unproductive. Part of the problem is that we don’t have a clear framework for thinking about how to govern AI. We end up with vague calls for government to “do more,” without clarity on what “more” actually means or where it makes sense.</p>

<!--more-->

<p>Metaphors can help. We don’t regulate highways the same way we regulate sports cars or school buses. So why should we talk about regulating “AI” like it’s a single thing?</p>

<p>Here’s a better way: think of the AI ecosystem as made up of four key components — cars, roads, buses, and traffic laws. This metaphor gives us a much clearer sense of where government should act, and just as importantly, where it shouldn’t.</p>

<h3 id="the-framework-mapping-the-ai-ecosystem"><strong>The Framework: Mapping the AI Ecosystem</strong></h3>

<p><img src="/assets/images/ai-car-bus-road-framework.png" alt="AI ecosystem framework: cars, roads, buses, and traffic laws" /></p>

<ul>
  <li><strong>Cars</strong> are like ChatGPT, Claude, Midjourney, or domain-specific prediction tools in finance or healthcare. The market thrives on competition and diversity here. We want lots of options built for different needs. Government should regulate for safety, not build the engines.</li>
  <li><strong>Roads</strong> are the open-access infrastructure that makes innovation possible — datasets, open-source models, interoperability standards. The private sector underinvests in these because they benefit everyone. Government should step in here.</li>
  <li><strong>Buses</strong> are the AI tools built or funded by the public sector for public needs: think medical support for rare conditions or tools to navigate complex public systems like taxes or housing. These exist where private incentives fall short.</li>
  <li><strong>Traffic Laws</strong> are the baseline guardrails that apply to everyone — factual accuracy in critical domains, child safety standards, content restrictions. They set the rules of the road without picking winners.</li>
</ul>

<h3 id="how-the-metaphor-helps-us-make-better-decisions"><strong>How the Metaphor Helps Us Make Better Decisions</strong></h3>

<p><strong>Public Roads: Open Infrastructure for Innovation</strong> Public investment should go toward foundational infrastructure like high-quality training datasets, especially in underrepresented domains (e.g., rare languages or public health). These are the roads AI travels on. They’re often invisible to users but essential for the whole system to function.</p>

<p>Some open-source base models might also fit here. They aren’t tailored to specific needs but serve as starting points for innovation — like roads that lead to different destinations.</p>

<p><strong>Private Cars: Let the Market Compete</strong> The market is great at building diverse, purpose-built models and applications. From legal summarization tools to recipe assistants, medical diagnostics, financial forecasting systems, and image or video generation tools, the diversity of “cars” is a feature, not a bug. We don’t want the government designing the next minivan or image diffusion model.</p>

<p>What we do want are smart traffic laws to ensure these tools are safe and fair. But the government shouldn’t try to build or compete in this layer directly — just as it doesn’t manufacture vehicles.</p>

<p><strong>Public Buses: Fill the Gaps Where Markets Don’t</strong> Some needs won’t be met by the market. Think of medical AI for rare diseases — low commercial return, high social value. Or government services made navigable through AI — tax filing help, housing aid, legal guidance.</p>

<p>These aren’t replacements for private tools. Just like buses don’t replace cars, they offer equitable access to essential services.</p>

<p><strong>Traffic Laws: Guardrails for a Shared System</strong> Some things need universal rules: adversarial robustness, child safety, misinformation, copyright compliance. These are the seatbelts and stop signs of the AI world. They don’t limit innovation — they make the whole system work better.</p>

<p>Good traffic laws don’t pick winners. They ensure safety while allowing a wide range of vehicles (and drivers) to share the road.</p>

<h3 id="why-this-framework-matters--and-how-to-use-it"><strong>Why This Framework Matters — And How to Use It</strong></h3>

<p>Many people don’t trust “big tech” to build AI that is in the best interest of society — or just don’t trust them at all. But before we start talking about the role government or public AI should play, we need to clarify what we mean. Are we talking about government-run AI? Open-access tools? Rules and oversight?</p>

<p>This framework helps clarify those questions:</p>

<ul>
  <li>Should we regulate this model? (Car)</li>
  <li>Should we fund this tool as infrastructure? (Road)</li>
  <li>Should we ensure this capability is accessible to underserved communities? (Bus)</li>
  <li>Do we need new rules to keep people safe? (Traffic laws)</li>
</ul>

<p>Rather than vague calls for government to “do more,” it invites sharper, more targeted questions — and encourages a smart division of labor between public and private actors. We don’t need the government to “build AI” — we need it to pave the roads, enforce the rules, and run the buses that help everyone get where they need to go.</p>

<p>By separating the roles of infrastructure, market competition, public access, and regulation, we can build an AI ecosystem that’s not just powerful, but also fair, inclusive, and sustainable. Let’s stop asking if government should “do more on AI.” Let’s ask what part of the AI stack we’re talking about — and whether it fits best as a car, a road, a bus, or a traffic law.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[The conversation around public AI is messy and often unproductive. Part of the problem is that we don’t have a clear framework for thinking about how to govern AI. We end up with vague calls for government to “do more,” without clarity on what “more” actually means or where it makes sense.]]></summary></entry><entry><title type="html">The Path Forward for AI in Education</title><link href="http://jeffkeltner.com/2025/06/02/the-path-forward-for-ai-in-eduction.html" rel="alternate" type="text/html" title="The Path Forward for AI in Education" /><published>2025-06-02T00:00:00+00:00</published><updated>2025-06-02T00:00:00+00:00</updated><id>http://jeffkeltner.com/2025/06/02/the-path-forward-for-ai-in-eduction</id><content type="html" xml:base="http://jeffkeltner.com/2025/06/02/the-path-forward-for-ai-in-eduction.html"><![CDATA[<p>We don’t yet know the destination, but we do know the path forward. At least, we know how to start walking it.</p>

<!--more-->
<p>AI’s impact on education is inevitable. As a parent, AI advisor, lifelong learner, and school trustee, I’ve had more than a few conversations about what to do next. One thing is clear: the cost of waiting is higher this time. In past tech shifts—whether the internet, mobile, or 1:1 devices—schools had time to prepare, debate, and cautiously adopt. AI doesn’t afford us that luxury. This wave is already here, and already in students’ hands.</p>

<p>So, what’s a school to do?</p>

<p>Here’s how I think about the path forward: act quickly, embrace experimentation, and be ready to change course. If we’re going to take advantage of this moment, I think there are three core principles to guide us.</p>

<h3 id="1-lean-into-the-ai-opportunity"><strong>1. Lean Into the AI Opportunity</strong></h3>

<p>The natural instinct for many educators is to focus on what’s being lost: originality in writing, authenticity in homework, control in the classroom. But there’s just as much—if not more—to be gained.</p>

<p>A recent <a href="https://www.nature.com/articles/s41599-025-04787-y">Nature study</a> found that AI use in education can improve learning performance, boost student engagement, and support higher-order thinking. That’s not a risk—it’s an opportunity.</p>

<p>So how do we lean in?</p>

<ul>
  <li><strong>Educate your teachers.</strong> Not just about what the tools are, but what they can do. Get them hands-on experience. Encourage cross-school conversations. Set aside time and budget for real professional development—not one-time webinars, but ongoing learning.</li>
  <li><strong>Educate your students.</strong> Help them see AI as a tool for learning, not cheating. Teach them how to use it responsibly and creatively. They’ll need that skill in the workplace—and in life.</li>
  <li><strong>Build a community of practice.</strong> This might be the most important step. Encourage teachers to share what’s working, what failed, and what surprised them. Celebrate successes just as much as you learn from failures. Create a space for experimentation and iteration—a culture where it’s okay to try, okay to fail, and even better to share what works.</li>
</ul>

<h3 id="2-rethink-assessment"><strong>2. Rethink Assessment</strong></h3>

<p>This may be the hardest challenge ahead. We can already see clear, practical ways to use AI to support teaching and learning—but adapting assessment to an age of abundant AI will be far tougher.</p>

<p>The traditional pillars of academic evaluation—essays, take-home assignments, even some problem sets—are no longer reliable when AI can produce competent versions of each. And while the instinct to “ban” AI use is understandable, it’s unlikely to be enforceable or effective.</p>

<p>So, we need to reimagine what assessment looks like:</p>

<ul>
  <li>
    <p><strong>More live, in-person evaluations.</strong> Whether it’s classroom discussions, oral exams, or group work, we’ll need to lean more on what can’t be outsourced to AI.</p>
  </li>
  <li>
    <p><strong>More project-based and interdisciplinary work.</strong> AI can be a great partner in exploration and execution. Let’s assess how students use it—not whether they avoid it.</p>
  </li>
  <li>
    <p><strong>More emphasis on thinking, not output.</strong> We’ve long claimed to teach critical thinking. Now we have to assess it directly.</p>
  </li>
</ul>

<p>This won’t be easy. But if we get it right, it may actually improve our approach to measuring what matters. And what matters most is not just a set of academic skills, but the broader capabilities we hope to nurture in students: curiosity, resilience, collaboration, creativity, and critical thinking. These are harder to measure—but ultimately more important to capture.</p>

<h3 id="3-move-fastbut-be-ready-to-change-course"><strong>3. Move Fast—but Be Ready to Change Course</strong></h3>

<p>The tools are evolving quickly. So is our understanding of how they work—and how they break.</p>

<p>That means we need to start moving. But we also need to stay nimble. Our first steps won’t be perfect. Our policies will need updates. What works today might fail tomorrow. That’s okay.</p>

<p>The real danger isn’t making a wrong call. It’s standing still.</p>

<p>So make your best guess about what the next right step is. Take it. Then pay attention, evaluate, and be ready to adapt. Don’t let the pursuit of perfect paralyze progress. And don’t let uncertainty stop you from starting.</p>

<h3 id="keep-the-student-at-the-center"><strong>Keep the Student at the Center</strong></h3>

<p>Above all, it is important to remember to keep our focus where it belongs—on the student.</p>

<p>Their learning, their excitement, their future. AI may be the catalyst, but the real opportunity is to do better for our students. To rethink how we teach, what we teach, and why we teach it. To re-engage learners in new ways. To build a system that prepares them not for the world we knew, but for the world they’ll inhabit.</p>

<p>This is our moment. Let’s meet it with boldness, humility, and a willingness to try.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[We don’t yet know the destination, but we do know the path forward. At least, we know how to start walking it.]]></summary></entry><entry><title type="html">A Simple Mode for Google Docs</title><link href="http://jeffkeltner.com/2025/05/28/google_docs_simple_mode.html" rel="alternate" type="text/html" title="A Simple Mode for Google Docs" /><published>2025-05-28T00:00:00+00:00</published><updated>2025-05-28T00:00:00+00:00</updated><id>http://jeffkeltner.com/2025/05/28/google_docs_simple_mode</id><content type="html" xml:base="http://jeffkeltner.com/2025/05/28/google_docs_simple_mode.html"><![CDATA[<p>I’ve been a fan of Google Docs for years. The collaboration features are unmatched—being able to edit documents together, leave comments, and see changes in real time has fundamentally changed how I work with others. It’s one of those rare tools that genuinely makes teamwork easier and better.</p>

<!--more-->
<p>But as the product has evolved, it’s started to feel… heavier. I understand why Google has pushed for feature parity with Microsoft Word—adding formatting controls, layout options, and advanced styles. But sometimes I just want all that to go away.</p>

<p>I don’t want to think about margins or font types or line spacing. I don’t want to scroll through menus or figure out why one paragraph has a different style from the next. I just want to write.</p>

<p>What I wish Google Docs had is something I’d call “Simple Mode.” Imagine a stripped-down, markdown-style editor. Bold, italics, headers, bullet points, maybe tables—and that’s it. A mode where all the formatting complexity is hidden and the focus is on the words. When I change the font, it changes everywhere. When I write, I don’t have to worry about what’s happening behind the scenes with styles or layout. Just text on a page.</p>

<p>Yes, I know you can try to define a default font style. I know you can mess with the margins and zoom levels. But it never quite works right. Styles don’t apply consistently, margins are finicky, and the moment I copy-paste something in, the formatting gremlins return.</p>

<p>I don’t need this mode for everything. I understand that many people are using Docs to produce polished, formatted documents. But for drafting, note-taking, or just getting thoughts down—this would be such a welcome relief.</p>

<p>So if any of my old colleagues at Google are reading this: here’s an idea I’d love to see. A simple writing mode. One that gets everything else out of the way and just lets me focus on the words.</p>

<p>I promise I’d use it all the time.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[I’ve been a fan of Google Docs for years. The collaboration features are unmatched—being able to edit documents together, leave comments, and see changes in real time has fundamentally changed how I work with others. It’s one of those rare tools that genuinely makes teamwork easier and better.]]></summary></entry><entry><title type="html">AI and Work: The Blurry Line Between Augmenting and Replacing Humans</title><link href="http://jeffkeltner.com/2025/05/15/ai-and-work-augmenting-vs-replacing-humans.html" rel="alternate" type="text/html" title="AI and Work: The Blurry Line Between Augmenting and Replacing Humans" /><published>2025-05-15T00:00:00+00:00</published><updated>2025-05-15T00:00:00+00:00</updated><id>http://jeffkeltner.com/2025/05/15/ai-and-work-augmenting-vs-replacing-humans</id><content type="html" xml:base="http://jeffkeltner.com/2025/05/15/ai-and-work-augmenting-vs-replacing-humans.html"><![CDATA[<p>Many people who discuss the future of work in a world of AI talk about using AI to “augment and not replace” humans. Indeed, it’s one of the core principles of the Stanford Human-Centered AI Institute (HAI): “AI should be designed to augment human capabilities, not replace them.”</p>

<!--more-->

<p>I’ve always found the distinction between “augmenting” and “replacing” human capabilities unclear. Partly, this confusion arises because these terms can be interpreted differently depending on context — and often, augmentation and replacement can really be the same thing. I’m a fan of examples to help make things clear, so let’s take a look at two.</p>

<p><strong>Marketing</strong></p>

<p>Perhaps a simple example is generating marketing assets — something many companies are already using AI for today. Whether it’s as simple as getting help with website copy or as complex as using AI video generation for TV ads, many (if not most) companies are using AI in some part of their marketing creative process. But does this count as augmenting or replacing?</p>

<p>Since humans are still involved in the process, there is a clear case that this simply augments those people. I think that’s a fair take. On the other hand, without AI there would have been <strong>more</strong> people involved in those processes. So, one could argue those people have been replaced.</p>

<p>Ultimately, any time we augment a human, we reduce the number of humans required to complete a given amount of work. This is why I think the augment vs replace distinction doesn’t make a lot of sense.</p>

<p><strong>Fighting Fraud</strong></p>

<p>Another way to look at the augment vs replace statement is that we shouldn’t replace human judgement with AI systems. That’s a little different than trying not to replace individual humans. Let’s think about this in the context of a fraud-fighting example.</p>

<p>Fighting fraud is actually something many financial institutions leverage AI for today. Do they fully outsource decisions to AIs in these instances, or do they just use the AI to augment human judgement? The answer is often both!</p>

<p>For instance, when AIs mark a transaction as very unlikely to be fraud, the transaction can often be completed with no human review. This is an important case to remember when debating this topic because people greatly benefit from the speed of decision-making enabled by the AI. Credit cards can’t work without this sort of system — imagine waiting in line at Starbucks for someone at Chase to review your transaction before it was approved!</p>

<p>On the other hand, when the AIs flag a higher-risk transaction, there is often a manual review process. For larger loans (think car loans or home loans), this may just mean being asked for an additional document or getting on a call with a representative. In the credit card example, you might be able to text back that the transaction is approved or call up the company to validate the purchase.</p>

<p>In those instances, we have put the human back in the loop and are using AI to augment their decision processes. And this makes sense. But the blanket idea that we shouldn’t replace human judgement with AI doesn’t make sense — we’d all be stuck in credit card lines all day long!</p>
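
<p>To make the “both” concrete, the routing logic behind these systems looks roughly like this. A toy sketch only; the thresholds, the score, and the actions are invented for illustration, not drawn from any real fraud system:</p>

<pre><code class="language-python"># Toy sketch of AI-plus-human fraud routing. Thresholds, the score,
# and the actions are all illustrative.

def route_transaction(fraud_score: float, amount: float) -> str:
    """Decide how a model-scored transaction gets handled."""
    if fraud_score &lt; 0.01:
        return "auto_approve"          # AI decides alone: instant checkout
    if fraud_score > 0.90:
        return "auto_decline"          # near-certain fraud: block it
    if amount > 10_000:
        return "manual_review"         # big ticket: a human steps in
    return "customer_confirmation"     # mid-risk: text the cardholder

# The vast majority of transactions land in the first branch, which is
# why you aren't stuck in line at Starbucks waiting on a human reviewer.
print(route_transaction(fraud_score=0.003, amount=6.50))   # auto_approve
print(route_transaction(fraud_score=0.40, amount=120.0))   # customer_confirmation
</code></pre>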

<p><strong>It Depends</strong></p>

<p>And unfortunately, that brings us to where many discussions about AI end up — “it depends.” Putting AI in the real world involves a series of choices and trade-offs, and there really aren’t simple answers to any of them. The right answer truly does depend on circumstances and details.</p>

<p>But I do think that Stanford’s HAI has the right general principle — we should keep human beings at the center of our decision-making about those trade-offs. We should be thinking about the small businesses that can make their first video ad thanks to generative AI. We should be thinking about the customer stuck in line at Home Depot waiting for approval. And we should also be thinking about the customer misidentified as a fraudster looking for a way to set the record straight.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[Many people who discuss the future of work in a world of AI talk about using AI to “augment and not replace” humans. Indeed, it’s one of the core principles of the Stanford Human-Centered AI Institute (HAI): “AI should be designed to augment human capabilities, not replace them.”]]></summary></entry><entry><title type="html">3 Levels of Business Metrics</title><link href="http://jeffkeltner.com/2024/02/27/3-levels-of-business-metrics.html" rel="alternate" type="text/html" title="3 Levels of Business Metrics" /><published>2024-02-27T00:00:00+00:00</published><updated>2024-02-27T00:00:00+00:00</updated><id>http://jeffkeltner.com/2024/02/27/3-levels-of-business-metrics</id><content type="html" xml:base="http://jeffkeltner.com/2024/02/27/3-levels-of-business-metrics.html"><![CDATA[<p>I’ve had a lot of conversation about OKRs — and I’ve even written down my
thoughts on the subject a few times. I’ve also spent countless hours debating
what the right OKRs for a company or team are. As I’ve reflected on the
struggle to capture the right OKRs, I realize there is really a significant
tension between metrics that are measuring very different kinds of things.</p>

<!--more-->
<p>In fact, I think there are really three questions we are trying to address in
our business metrics.</p>

<ul>
  <li><strong>How well are we managing our business?</strong></li>
  <li><strong>How well are we executing our plans?</strong></li>
  <li><strong>How are we improving our business?</strong></li>
</ul>

<p>The challenge is really that each of these questions requires a different kind
of metric and approach to management. OKRs are far better suited to the last
question than to the first two. I actually think that many of the issues people
face when deploying OKRs stem from the tension of trying to fit the answers to
all three of these questions into one goal-setting framework.</p>

<p>For very early stage companies, it’s often true that the third is the only
truly relevant question. There is no existing business to really “manage” and
all plans are to improve the business. So, everything sort of meshes together,
and the single system can work. But for larger companies (or smaller companies
that have grown!), this system breaks down.</p>

<p>Let’s take a look at each of these three questions in turn and see what sort
of metrics and goals might be appropriate for each.</p>

<p><strong>How well are we managing our business?</strong></p>

<p>This seems like an obvious question to answer, but often it can take a back
seat to big ambitious goals. However, many parts of a business aren’t really
good targets for big ambitious goals. You may have a high level of customer
service and satisfaction you need to maintain. Or you may need to manage costs
down slightly. This sort of metric, where the goal is to maintain a good status
quo or improve it slightly, isn’t a good fit for the “ambitious goals”
approach.</p>

<p>Instead, this is an area where you likely need to manage with a more
traditional KPI structure. If you can identify the key metrics that measure
your performance, you should be able to manage them to a target. The beauty of
KPIs (vs OKRs) is that you can have more of them, and they can be relatively
static. They are like health metrics for your business. Customer satisfaction,
margins, customer inquiry response times, etc. There is an art to picking the
right KPIs and the goal for each — but they are a good way to manage a
business that you understand pretty well.</p>

<p><strong>How well are we executing our plans?</strong></p>

<p>I think this is the question that often gets short shrift. OKRs and KPIs both
tend to focus on business-level metrics — or what are called output metrics.
They don’t really measure how well a team executed — especially not in the
short run. We’ve all seen teams that hit their metrics due to external
circumstances, despite failing to execute well on their initiatives. And we’ve
all seen the opposite.</p>

<p>Any company that wants to maintain a high level of operational excellence has
to be very effective at doing what it sets out to do — and unfortunately we
too often combine our measurement of that with the question “did our
initiatives have the anticipated impact?” That makes it hard for us to
honestly track how well we execute initiatives and be honest about how to get
better on this front. It also means that too often teams are rewarded (or not)
based on things outside their control.</p>

<p>I haven’t seen an amazing system for managing the execution of initiatives or
projects. OKRs and KPIs seem to take up a lot of air, but I don’t know the
equivalent acronym for this space (if anyone has one, please share it!).
However, I can identify a few key elements I think are critical for doing this
well.</p>

<ol>
  <li><strong>Focus on the deliverables</strong>. The whole purpose here is to separate strategy from execution. These measurements should be input-oriented and focused on what the team is delivering.</li>
  <li><strong>Set medium time frames.</strong> I’m not sure you need to stick to a strict, one-size fits all time frame like quarterly. But these should be projects between weeks and months. Long enough to be meaningful and challenging to execute, but short enough to be reasonably predictable.</li>
  <li><strong>Keep the scope contained.</strong> For the most part, teams should own the resources and dependencies they count on for this sort of project. That’s clearly not always possible, but it’s a good starting principle. This is a metric meant to manage execution — so make sure everyone who is responsible for the project buys into this.</li>
</ol>

<p><strong>How are we improving our business?</strong></p>

<p>In my experience, this is the core question everyone is looking to answer
with OKRs. How are we getting better? And the challenge is that it can give
short shrift to the management of more traditional KPIs and often doesn’t
tightly correlate to execution. I’m still a big fan of OKRs — I think having
some big goals about how a company needs to improve is critical to really
driving change.</p>

<p>And I think OKRs can be more focused on this purpose. We aren’t also trying to
use them to manage more stable parts of a business and measure our ability to
execute. Once we offload those tasks, we can really think about what big
things we want or need to change about our business and focus our OKRs on
that.</p>

<p>That’s it. That was my recent thought on metrics and measurement. Stop trying
to fit all of your goals into one system, and use different approaches to how
you measure the answer to these three very different questions!</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[I’ve had a lot of conversation about OKRs — and I’ve even written down my thoughts on the subject a few times. I’ve also spent countless hours debating what the right OKRs for a company or team are. As I’ve reflected on the struggle to capture the right OKRs, I realize there is really a significant tension between metrics that are measuring very different kinds of things.]]></summary></entry><entry><title type="html">Arc Browser</title><link href="http://jeffkeltner.com/2024/02/06/arc-browser.html" rel="alternate" type="text/html" title="Arc Browser" /><published>2024-02-06T00:00:00+00:00</published><updated>2024-02-06T00:00:00+00:00</updated><id>http://jeffkeltner.com/2024/02/06/arc-browser</id><content type="html" xml:base="http://jeffkeltner.com/2024/02/06/arc-browser.html"><![CDATA[<p>As an avowed gadget geek, one thing I’ve enjoyed about moving on from my role
at Upstart has been the freedom to install whatever tools I want on my laptop,
without having to negotiate with IT and InfoSec folks (you guys know I love
you, but it’s true…) . I know I could have done this earlier with a personal
laptop, but who wants to manage multiple laptops?!?! So, I’ve been playing
with some new toys recently and there is one I want to share — the <a href="https://arc.net/">Arc
Browser</a>. I spend most of my time on my computer in the
browser, so I was excited to see someone come out with a truly fresh take on
how it should work. Here are my favorite features:</p>

<!--more-->
<ul>
  <li><strong>Side Tab Bar.</strong> Like most people, I find that my screen often has more horizontal space than I need — but I’m always scrolling vertically. Moving the tab bar to the side of the window, vs the top, opens up more screen real estate for what I’m doing.</li>
  <li><strong>Hide the Tab Bar.</strong> You can also hide the tab bar with a quick Cmd-S keystroke — freeing up even more space, and helping you (or at least me) stay focused on what you’re doing.</li>
  <li><strong>Limited “Chrome”.</strong> This was one of the killer features of Chrome on its release (and how it got its name). They just got rid of so much of the stuff around the browser to focus on what you were doing. Arc takes this to the next level (you can even remove the URL bar by default!). Having more space for my web apps/sites is really nice.</li>
  <li><strong>Spaces.</strong> This is probably the biggest departure in terms of regular use compared to other browsers. Spaces are basically areas where you can have a set of different open tabs and Bookmarks/Favorites (or Pinned Tabs, as Arc calls them). I found this incredibly useful. I’ve never been a huge fan of just leaving open tons of tabs to get to later. However, I do see the value of being able to leave open a few tabs for a project I might be working on. Now, I make that a space. This means I can leave all my open tabs, and even Pin a few tabs I know I will always need when working on this project. But when I switch to a different space, they all vanish and don’t clutter up my window. I have a General space with email, calendar, and a bunch of favorites. I have a news/reading space with Feedly and Pocket Pinned and open tabs for anything I was in the middle of but haven’t finished. I also have specific spaces for different projects I’m working on. I think of each as a context — and I can arrange my Space for that context — and then just leave it when I switch contexts. I have loved the workflow this feature has enabled!</li>
  <li>
    <p><strong>Preview Links.</strong> When you have a link in an email (or anywhere else, really), clicking it opens in a little preview sub-window without leaving your tab. If it’s a quick thing to read/finish, you just close it and you’re right back where you started. If it might take longer, you can fully open it (into any Space) and leave it there for later. Surprisingly saves a good amount of time.</p>
  </li>
  <li><strong>Chrome Extensions.</strong> This isn’t a reason to switch from Chrome, but it’s awesome to know that I can get all of my favorite Chrome extensions here — 1Password, Pocket, etc.</li>
  <li><strong>Video Mini Player.</strong> If you’re watching a video and switch to a new tab, the video will pop out in a mini-player and just keep going. I find this incredibly useful in a number of contexts. Very smart execution here.</li>
</ul>

<p>If you do most of your work in browser and are curious about any of the above,
I recommend you give it a try. I’ve really enjoyed the experience of switching
to Arc, and honestly I’m just glad to see some real innovation around the
browser that acknowledges how much time we all spend in it and tries to make
that time a little bit more pleasant and efficient.</p>]]></content><author><name>Jeff Keltner</name></author><summary type="html"><![CDATA[As an avowed gadget geek, one thing I’ve enjoyed about moving on from my role at Upstart has been the freedom to install whatever tools I want on my laptop, without having to negotiate with IT and InfoSec folks (you guys know I love you, but it’s true…) . I know I could have done this earlier with a personal laptop, but who wants to manage multiple laptops?!?! So, I’ve been playing with some new toys recently and there is one I want to share — the Arc Browser. I spend most of my time on my computer in the browser, so I was excited to see someone come out with a truly fresh take on how it should work. Here are my favorite features:]]></summary></entry></feed>