<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Pixelflips</title>
	<atom:link href="https://pixelflips.com/feed?cat=-21" rel="self" type="application/rss+xml" />
	<link>https://pixelflips.com</link>
	<description>pixelflips - flippin&#039; ideas into creative and clean web and interface designs while keeping a focus on web standards.</description>
	<lastBuildDate>Mon, 13 Apr 2026 01:38:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/pixelflips.com/wp-content/uploads/2017/01/apple-touch-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>Pixelflips</title>
	<link>https://pixelflips.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">98725035</site>	<item>
		<title>Unregulated, Unaccountable, Unchecked.</title>
		<link>https://pixelflips.com/blog/unregulated-unaccountable-unchecked</link>
					<comments>https://pixelflips.com/blog/unregulated-unaccountable-unchecked#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 01:35:11 +0000</pubDate>
				<category><![CDATA[Personal]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10863</guid>

					<description><![CDATA[Everyone&#8217;s arguing about which AI tool writes the best code or generates the best images. Meanwhile, the companies building these systems are racing to ship faster, raise more, and consolidate more power than any tech cycle before them. And a small group of researchers, journalists, and engineers have been trying to get your attention about [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Everyone&#8217;s arguing about which AI tool writes the best code or generates the best images. Meanwhile, the companies building these systems are racing to ship faster, raise more, and consolidate more power than any tech cycle before them. And a small group of researchers, journalists, and engineers have been trying to get your attention about why that should concern you.</p>



<span id="more-10863"></span>



<p>Most people aren&#8217;t listening.</p>



<p>I&#8217;m not talking about AI taking your job. I&#8217;m talking about the people asking whether we even understand what we&#8217;re building, whether we can control it, and what happens when the people in charge care more about valuation than safety. These aren&#8217;t fringe voices. Some of them literally built the foundations of modern AI.</p>



<p>And they&#8217;re worried.</p>



<p>This post is my attempt to point you toward the people and organizations I think are worth your time. I don&#8217;t agree with every single take on this list. But these are the conversations that actually matter, and most people aren&#8217;t even aware they&#8217;re happening.</p>



<h2 class="wp-block-heading">The race to the bottom</h2>



<p>The AI industry has no meaningful oversight. No regulatory body audits these models before they ship. No accountability when something goes wrong. The companies building the most powerful systems anyone&#8217;s ever built are essentially self-governing, and they&#8217;re doing exactly what you&#8217;d expect self-governing corporations to do: whatever is fastest and most profitable.</p>



<p>Ronan Farrow and Andrew Marantz just published <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted" target="_blank" rel="noreferrer noopener nofollow">a piece in The New Yorker</a> about Sam Altman, built on over a hundred interviews and internal documents. A board member is quoted as saying, &#8220;I don&#8217;t think Sam is the guy who should have his finger on the button.&#8221; It&#8217;s one company, but it&#8217;s not one company&#8217;s problem. The entire industry is in an arms race where safety is a line item, and speed is the product. The people on this list have been saying this for years. Testifying before Congress. Writing open letters. Quitting their jobs to speak freely. And still, nothing has changed structurally.</p>



<p>That has to change.</p>



<h2 class="wp-block-heading">The ones sounding the alarm</h2>



<p><strong>Geoffrey Hinton</strong> is often called the godfather of deep learning. He spent decades building the neural network architectures that power the AI systems we use today. Then he left Google specifically so he could speak freely about the risks. <strong>Yoshua Bengio</strong>, the other godfather of deep learning, has done the same. He&#8217;s publicly called for regulation and warned about existential risk. When two of the three people most responsible for modern AI are telling you something is wrong, that&#8217;s not alarmism. That&#8217;s a signal.</p>



<p><strong>Eliezer Yudkowsky</strong> has been writing about AI alignment for over twenty years. Long before &#8220;alignment&#8221; was a word anyone in Silicon Valley bothered with. He&#8217;s polarizing. He&#8217;s uncompromising. He will test your patience. But he&#8217;s been more right about the trajectory of AI development than most of the people who dismissed him, and his writing on the alignment problem is some of the clearest thinking on the subject that exists.</p>



<p><strong>Roman Yampolskiy</strong> is an AI safety researcher whose core argument is blunt: we have no proof that AI can be controlled, and we&#8217;re building it anyway. Not &#8220;we haven&#8217;t figured it out yet.&#8221; We have no proof it&#8217;s even possible. That distinction matters, and most people haven&#8217;t sat with it long enough.</p>



<p><strong>Connor Leahy</strong> brings something different to this conversation: urgency without the academic distance. He founded Conjecture and now leads <a href="https://controlai.com/" target="_blank" rel="noreferrer noopener">Control AI</a>, and he&#8217;s one of the younger voices in this space pushing hard on governance and policy. He&#8217;s not waiting for consensus. He&#8217;s building the case for action now.</p>



<p><strong>Timnit Gebru</strong> co-authored a research paper on the risks of large language models while at Google. Google fired her for it. Let that sit for a second. One of the companies building these systems fired a researcher for documenting the harms. She went on to found the DAIR Institute and has continued the work, but her story tells you everything you need to know about how seriously these companies take internal criticism.</p>



<h2 class="wp-block-heading">The ones making it make sense</h2>



<p>The researchers above are doing critical work, but let&#8217;s be honest, most people aren&#8217;t going to read a paper on AI containment theory. That&#8217;s where communicators come in.</p>



<p><strong>Tristan Harris</strong> made tech ethics a mainstream conversation with The Social Dilemma. Through the Center for Humane Technology, he&#8217;s been one of the most effective people at translating abstract tech risks into something your parents can understand. His focus has shifted heavily toward AI, and nobody&#8217;s been better at framing these issues for a general audience.</p>



<p><strong>Hannah Fry</strong> is a mathematician who communicates the implications of AI and algorithms without dumbing anything down. She holds complexity and clarity at the same time, which is harder than it sounds, and her work is a good starting point if the more technical voices feel overwhelming.</p>



<p><strong>Karen Hao</strong> is a journalist who has done some of the best investigative reporting on AI&#8217;s real-world impact, from bias in facial recognition to the human labor behind &#8220;automated&#8221; systems. Her work at MIT Technology Review and The Atlantic has shown what happens when these systems meet actual people, and it&#8217;s not the clean story the press releases tell.</p>



<h2 class="wp-block-heading">Where to start</h2>



<p>If you&#8217;re reading this and thinking &#8220;okay, but what do I actually do,&#8221; here are some places to start:</p>



<p>Watch <a href="https://www.imdb.com/title/tt39150120/" target="_blank" rel="noreferrer noopener"><em>The AI Doc: Or How I Became an Apocaloptimist</em></a>. Daniel Roher&#8217;s documentary came out earlier this year and features several of the people on this list. It&#8217;s not perfect, but it&#8217;ll get you up to speed faster than anything else I can point you to.</p>



<p><a href="https://www.thehumanmovement.org/" target="_blank" rel="noreferrer noopener">The Human Movement</a> is focused on public awareness and grassroots mobilization. If you&#8217;ve never engaged with AI safety beyond the occasional headline, this is a good front door.</p>



<p><a href="https://humancompatible.ai/" target="_blank" rel="noreferrer noopener">Human Compatible AI</a> is Stuart Russell&#8217;s research center at UC Berkeley, working on the technical side of building AI systems that are actually aligned with human values. Russell wrote the textbook on AI, the one used in most university courses, and this is where he&#8217;s putting his energy.</p>



<p><a href="https://controlai.com/" target="_blank" rel="noreferrer noopener">Control AI</a> tackles policy and governance. If you want to understand what regulatory frameworks might actually look like and why they matter now, not later, start here.</p>



<h2 class="wp-block-heading">This isn&#8217;t a doom post</h2>



<p>I use AI every day. I build with it. I&#8217;m not anti-AI, and I&#8217;m not going to pretend otherwise.</p>



<p>But I am paying attention to the people who understand these systems at a fundamental level and are saying, clearly and repeatedly, that we need to slow down and think about what we&#8217;re doing. The fact that most people can&#8217;t name a single person on this list bothers me. These aren&#8217;t obscure academics yelling into the void. They&#8217;re some of the most accomplished people in AI, ethics, and tech journalism, and the companies building these systems are not listening to them.</p>



<p>The least you can do is start listening. And if you&#8217;re already listening and have others to add to the list, let me know! </p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/unregulated-unaccountable-unchecked/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10863</post-id>	</item>
		<item>
		<title>It&#8217;s Friday. We&#8217;re Tired.</title>
		<link>https://pixelflips.com/blog/its-friday-were-tired</link>
					<comments>https://pixelflips.com/blog/its-friday-were-tired#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 23:04:56 +0000</pubDate>
				<category><![CDATA[Design & Dev]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10835</guid>

					<description><![CDATA[It&#8217;s 4 pm on a Friday. I&#8217;ve approved my last PR of the week. I&#8217;ve reviewed more code today than I wrote. I got a lot done, or at least, the dashboard says I did. But I&#8217;m sitting here, and I can&#8217;t think straight. Not in the dramatic, existential-crisis way. Just&#8230; tired. A kind of [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>It&#8217;s 4 pm on a Friday. I&#8217;ve approved my last PR of the week. I&#8217;ve reviewed more code today than I wrote. I got a lot done, or at least, the dashboard says I did. But I&#8217;m sitting here, and I can&#8217;t think straight. Not in the dramatic, existential-crisis way. Just&#8230; tired. A kind of tired I didn&#8217;t use to feel from this job.</p>



<span id="more-10835"></span>



<p>I use AI every day. I&#8217;m not performing some Luddite rebellion from my standing desk. Claude is open. I reach for it constantly, and it helps. That&#8217;s not the issue.</p>



<p>The issue is what happened to the rest of the job while nobody was looking.</p>



<h2 class="wp-block-heading">The Shift Nobody Named</h2>



<p>Writing code is a generative act. You&#8217;re building something. You&#8217;re in flow, that state where two hours vanish, and you look up, and the thing works, and you feel good. That used to be most of the day.</p>



<p>Now most of the day is evaluation. Reviewing AI-generated output. Reverse-engineering logic that came from somewhere outside your own head, except the &#8220;somewhere&#8221; isn&#8217;t even a colleague you can ask questions to. It&#8217;s a model that was very confident and very fast and may or may not have understood what it was doing.</p>



<p>These are two different cognitive gears. Generation gives you energy. Evaluation takes it. And the human brain isn&#8217;t built for what we&#8217;re asking it to do right now.</p>



<p>Working memory holds about <a href="https://codecondo.com/cognitive-overload-developer-performance/" target="_blank" rel="noreferrer noopener nofollow">four to five items at once</a>. That&#8217;s it. Every PR you review asks you to load up a new context, different feature, different assumptions, different patterns, and hold it all in your head long enough to decide if it&#8217;s right. Do that a few times, and you&#8217;re fine. Do it dozens of times a day and you&#8217;re not reviewing anymore. You&#8217;re reacting.</p>



<p>There&#8217;s a name for this: <a href="https://super-productivity.com/blog/decision-fatigue-for-developers/" target="_blank" rel="noreferrer noopener nofollow">decision fatigue</a>. Your ability to make good judgments degrades with every decision you make. By afternoon, fatigued reviewers are approving things with an &#8220;LGTM&#8221; that their morning selves would have flagged. Not because they stopped caring. Because the brain ran out of capacity to care with precision.</p>



<p>We&#8217;re stuck in this mode for hours at a time now, context-switching across PRs that all look plausible but need real attention to verify. Developers on high AI-adoption teams are <a href="https://devops.com/the-ai-productivity-paradox-how-developer-throughput-can-stall/" target="_blank" rel="noreferrer noopener nofollow">merging 98% more pull requests</a>, but PR review time increased 91%. The volume doubled. The brain didn&#8217;t.</p>



<p>You can&#8217;t maintain the same standard of care at twice the volume. Something has to give. And right now, what&#8217;s giving is us.</p>



<h2 class="wp-block-heading">The Dashboards Look Great</h2>



<p>Here&#8217;s the part that makes it worse: from the outside, everything looks like it&#8217;s working. More PRs merged. More tickets closed. Velocity is up. Leadership sees acceleration and assumes the team is thriving.</p>



<p>What they don&#8217;t see is that the humans behind those numbers are absorbing the cognitive cost of all that speed. Fortune reported that AI gave workers roughly <a href="https://fortune.com/2026/03/10/ai-productivity-workers-workday-efficiency/" target="_blank" rel="noreferrer noopener nofollow">six extra hours a week</a>. Nobody got to keep them. The workload expanded to fill every minute that got freed up. That&#8217;s not productivity. That&#8217;s a treadmill that got faster.</p>



<p>And I get it. If you&#8217;re a VP staring at a Jira board that&#8217;s never looked greener, why would you question it? The metrics are saying exactly what you want to hear. But the metrics don&#8217;t measure the developer who stopped reading PRs line by line because there are just too many. They don&#8217;t measure the senior engineer who used to catch subtle architectural issues but now has to triage so much volume that she&#8217;s pattern-matching instead of thinking. They don&#8217;t measure the mass-approval that happens at 4:47 pm on a Friday because everyone&#8217;s cooked.</p>



<p>Addy Osmani called this <a href="https://addyosmani.com/blog/comprehension-debt/" target="_blank" rel="noreferrer noopener nofollow">&#8220;comprehension debt&#8221;</a>, when teams ship code faster than they can understand it. I&#8217;d call it something simpler.</p>



<p>We&#8217;re tired.</p>



<h2 class="wp-block-heading">Still Here, Still Caring</h2>



<p>This isn&#8217;t a post about quitting. It&#8217;s not about checking out or going back to writing everything by hand. It&#8217;s about the exhaustion that hits people who are still trying to do good work inside a system that keeps raising the tempo without asking if anyone can keep up.</p>



<p>The developers who still read every line of a PR. Who still flag the thing that &#8220;probably works fine&#8221; but doesn&#8217;t sit right. Who are tired because caring takes more effort than it used to.</p>



<p>TechCrunch ran a piece in February with a headline that stuck with me: <a href="https://techcrunch.com/2026/02/09/the-first-signs-of-burnout-are-coming-from-the-people-who-embrace-ai-the-most/" target="_blank" rel="noreferrer noopener nofollow">&#8220;The first signs of burnout are coming from the people who embrace AI the most.&#8221;</a> Read that again. The ones burning out aren&#8217;t the skeptics. They&#8217;re the ones who leaned in. The ones who did the thing everyone told them to do.</p>



<p><a href="https://stackoverflow.blog/2026/02/18/closing-the-developer-ai-trust-gap/" target="_blank" rel="noreferrer noopener nofollow">38% of developers</a> say reviewing AI-generated code is harder than reviewing code from a colleague. Honestly? That tracks. When a teammate writes something, there&#8217;s a conversation behind it. A shared context. A Slack thread. When AI writes something, you&#8217;re on your own with a diff and a gut feeling.</p>



<p>It&#8217;s Friday. We did the work. We reviewed the code. We caught the things that needed catching. We&#8217;re tired. And on Monday, we&#8217;ll do it again, because that&#8217;s what giving a damn looks like.</p>



<p>But someone should probably notice.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/its-friday-were-tired/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10835</post-id>	</item>
		<item>
		<title>stockpile: know what&#8217;s in your arsenal</title>
		<link>https://pixelflips.com/blog/stockpile-know-whats-in-your-arsenal</link>
					<comments>https://pixelflips.com/blog/stockpile-know-whats-in-your-arsenal#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 00:07:49 +0000</pubDate>
				<category><![CDATA[Design & Dev]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10792</guid>

					<description><![CDATA[At some point, I stopped knowing what was on my Mac. A tool here, a dependency my agent dropped in there. Collectively, a mystery. I was losing track. To help, I created stockpile. Not all at once. It was gradual. A tool I installed to try out six months ago. A dependency that an agent [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>At some point, I stopped knowing what was on my Mac. A tool here, a dependency my agent dropped in there. Collectively, a mystery. I was losing track. To help, I created stockpile.</p>



<span id="more-10792"></span>



<p>Not all at once. It was gradual. A tool I installed to try out six months ago. A dependency that an agent dropped in to get something working. A gem that showed up on a backup restore from a machine I barely remember using. Each one was perfectly reasonable at the time. Collectively, a mystery.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/product-image.png?ssl=1"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="846" height="520" src="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/product-image.png?resize=846%2C520&#038;ssl=1" alt="Results view after running the stockpile script" class="wp-image-10795" srcset="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/product-image.png?w=846&amp;ssl=1 846w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/product-image.png?resize=300%2C184&amp;ssl=1 300w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/product-image.png?resize=768%2C472&amp;ssl=1 768w" sizes="(max-width: 846px) 100vw, 846px" /></a><figcaption class="wp-element-caption">Results view after running the <a href="https://stockpile.pixelflips.com" data-type="link" data-id="https://stockpile.pixelflips.com" target="_blank" rel="noreferrer noopener">stockpile script</a>.</figcaption></figure>



<p>I had no idea what was actually installed on my computer, and for a while, I just lived with that.</p>



<h2 class="wp-block-heading">It wasn&#8217;t always agents</h2>



<p>Package sprawl is older than AI. It starts the minute you follow a tutorial that tells you to <code>npm install -g</code> something you&#8217;ll use once and never think about again. Then you restore from a backup without auditing what came with it. A new language ecosystem enters your workflow and brings its own installer, its own conventions, and its own quiet list of things it decided you need.</p>



<p>The problem isn&#8217;t that packages get installed. It&#8217;s that there&#8217;s no master list. No single place to go and say: here&#8217;s everything, where it came from, what it does, and whether I actually still need it.</p>



<p>AI agents made this worse. Genuinely worse. Not because they&#8217;re doing anything wrong (they&#8217;re doing exactly what you asked), but because they move fast and they install quietly. You tell an agent to get something working, and it will, and three deploys later, you&#8217;re staring at a global npm package you don&#8217;t recognise and trying to remember which conversation put it there.</p>



<h2 class="wp-block-heading">Too many managers, zero answers</h2>



<p>On a Mac, the actual audit goes something like this. You check Homebrew. Then casks. Then the global npm packages. Then pip. Then Ruby gems, and by the way, are those system gems or gems you installed? Then Cargo binaries. Then the Mac App Store, which <em>technically</em> has a command-line interface if you know where to find it.</p>
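


<p>Spelled out, that round-trip looks something like this. The exact flags vary by tool and version, so treat it as a sketch of the manual audit, not stockpile itself:</p>



<pre class="wp-block-code"><code># One listing command per manager; each is skipped if you don't have it
if command -v brew  >/dev/null; then brew list --formula; fi   # Homebrew
if command -v brew  >/dev/null; then brew list --cask; fi      # casks
if command -v npm   >/dev/null; then npm ls -g --depth=0; fi   # global npm packages
if command -v pip3  >/dev/null; then pip3 list; fi             # Python packages
if command -v gem   >/dev/null; then gem list; fi              # Ruby gems
if command -v cargo >/dev/null; then cargo install --list; fi  # Cargo binaries
if command -v mas   >/dev/null; then mas list; fi              # Mac App Store apps</code></pre>

<p>Seven managers, seven commands, seven different output formats. That&#8217;s the point.</p>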



<p>A lot of package managers. Different commands, different output formats. At no point do they talk to each other.</p>



<p>I kept a mental model of my machine that was somewhere between optimistic and wrong. I&#8217;d run <code>brew list</code>, feel briefly in control, and then remember I hadn&#8217;t checked npm in months. It wasn&#8217;t a workflow. It was archaeology.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/dialog-image.png?ssl=1"><img data-recalc-dims="1" decoding="async" width="840" height="520" src="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/dialog-image.png?resize=840%2C520&#038;ssl=1" alt="Details popup of a package in stockpile" class="wp-image-10796" srcset="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/dialog-image.png?w=840&amp;ssl=1 840w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/dialog-image.png?resize=300%2C186&amp;ssl=1 300w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/dialog-image.png?resize=768%2C475&amp;ssl=1 768w" sizes="(max-width: 840px) 100vw, 840px" /></a><figcaption class="wp-element-caption">Details pop-up of a package in <a href="https://stockpile.pixelflips.com" data-type="link" data-id="https://stockpile.pixelflips.com" target="_blank" rel="noreferrer noopener">stockpile</a></figcaption></figure>



<h2 class="wp-block-heading">A weekend and an idea</h2>



<p>Here&#8217;s what I want to say about AI that I don&#8217;t say enough: it has genuinely changed what one person can build alone in a short stretch of time.</p>



<p>Stockpile was a back-burner idea for a long time. The kind of thing where you know exactly what you want, you can picture how it should work, and you also know it&#8217;ll take longer than you have. So it sits there.</p>



<p>This one didn&#8217;t sit. I built it in a weekend. Not because AI wrote it for me. I made every architectural decision, debugged every edge case, and argued with the output constantly. But the friction that usually kills side projects before they start just wasn&#8217;t there. The gap between &#8220;I want this thing to exist&#8221; and &#8220;this thing exists&#8221; is closing in a way that&#8217;s hard to overstate if you build things on your own.</p>



<p>That part still surprises me, honestly.</p>



<h2 class="wp-block-heading">So I built the thing</h2>



<p>Stockpile is a bash script. You run it once. It scans all seven package managers, assembles the results, and opens a browser-based dossier of everything on your machine: every package, every version, install paths, descriptions, and run commands. No server. No account. Nothing leaves your machine.</p>



<p>That last part mattered to me. There&#8217;s no reason a tool like this needs a cloud component. Everything is local. The name felt right, too: a stockpile is yours. You should know what&#8217;s in it.</p>



<p>I built it because I needed it. I use it now. And it turns out the first time you run it and see everything laid out in one place, the feeling is somewhere between satisfaction and mild alarm.</p>



<p>You&#8217;ll find things you forgot about. Probably some things you don&#8217;t recognise. Maybe a few things your agents installed that you never explicitly asked for.</p>



<p>It&#8217;s free and open source. If your machine needs an audit, <a href="https://stockpile.pixelflips.com" target="_blank" rel="noreferrer noopener">stockpile is ready when you are</a>. I&#8217;d genuinely love to know what you find.</p>



]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/stockpile-know-whats-in-your-arsenal/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10792</post-id>	</item>
		<item>
		<title>So I Put My AI Skills in a Marketplace</title>
		<link>https://pixelflips.com/blog/so-i-put-my-ai-skills-in-a-marketplace</link>
					<comments>https://pixelflips.com/blog/so-i-put-my-ai-skills-in-a-marketplace#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Sat, 21 Mar 2026 03:13:21 +0000</pubDate>
				<category><![CDATA[Design & Dev]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10782</guid>

					<description><![CDATA[Everyone&#8217;s building custom skills for their AI coding assistants right now. Commit helpers, linting rules, project scaffolding. Little markdown files that make Claude Code do things your way instead of the default way. I&#8217;ve been doing the same thing. But the more skills I wrote, the more I wanted them to follow me between machines [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Everyone&#8217;s building custom skills for their AI coding assistants right now. Commit helpers, linting rules, project scaffolding. Little markdown files that make Claude Code do things your way instead of the default way. I&#8217;ve been doing the same thing. But the more skills I wrote, the more I wanted them to follow me between machines and projects without copy-pasting files around.</p>



<span id="more-10782"></span>



<p>So I built a personal marketplace. A private GitHub repo that acts as my own skill distribution system. I install it on a new machine, pick the skills I want, and everything works the way I expect it to. No setup rituals, no re-explaining how I like things done.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/claude-code-marketplace.jpg?ssl=1"><img data-recalc-dims="1" decoding="async" width="730" height="438" src="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/claude-code-marketplace.jpg?resize=730%2C438&#038;ssl=1" alt="" class="wp-image-10785" srcset="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/claude-code-marketplace.jpg?w=730&amp;ssl=1 730w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/claude-code-marketplace.jpg?resize=300%2C180&amp;ssl=1 300w" sizes="(max-width: 730px) 100vw, 730px" /></a></figure>



<p>Here&#8217;s what that looks like and why I think more people should be doing this.</p>



<h2 class="wp-block-heading">Same session, different day</h2>



<p>Some skills are universal. I want my commit format everywhere. Others only matter in context, like how my Obsidian vault is structured or where session logs go. And I keep moving between machines, so anything stored locally is one laptop swap away from gone. The universal stuff needs to be portable. The project-specific stuff needs to stay out of the way when it&#8217;s not relevant.</p>



<p>Claude Code has memory and CLAUDE.md files that help with some of this. But there&#8217;s a gap between &#8220;remembering facts&#8221; and &#8220;knowing how to do something your way.&#8221; Memory stores that you prefer conventional commits. A skill actually enforces the format, stages the right files, and asks you to confirm (or not) before committing. One is a note on a whiteboard. The other is a workflow.</p>



<p>That distinction matters. The more workflows you encode, the less prompting you do. And the less prompting you do, the more time you spend on the actual work.</p>



<h2 class="wp-block-heading">Your repo is the marketplace</h2>



<p>The setup is almost comically simple. Anthropic added <a href="https://code.claude.com/docs/en/plugin-marketplaces" target="_blank" rel="noreferrer noopener nofollow">plugin and marketplace support</a> to Claude Code, and all it takes is a GitHub repo with a <code>marketplace.json</code> file and a folder per plugin. Each plugin contains a <code>SKILL.md</code> markdown file with instructions that Claude loads when the skill triggers. That&#8217;s the whole thing.</p>



<p>Here&#8217;s what the structure looks like:</p>



<pre class="wp-block-code"><code>your-marketplace/
├── .claude-plugin/marketplace.json
└── skills/
    ├── your-skill-one/
    │   ├── .claude-plugin/plugin.json
    │   └── skills/your-skill-one/
    │       └── SKILL.md
    └── your-skill-two/
        └── ...</code></pre>
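


<p>The <code>marketplace.json</code> at the root is just a manifest pointing Claude Code at each plugin folder. Mine looks roughly like this; I&#8217;m paraphrasing the schema from memory, so check the marketplace docs linked above for the authoritative field names:</p>



<pre class="wp-block-code"><code>{
  "name": "your-marketplace",
  "owner": { "name": "you" },
  "plugins": [
    {
      "name": "your-skill-one",
      "source": "./skills/your-skill-one",
      "description": "What the skill does, shown when you browse and install"
    },
    {
      "name": "your-skill-two",
      "source": "./skills/your-skill-two",
      "description": "..."
    }
  ]
}</code></pre>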



<p>You register it as a marketplace in Claude Code, install whichever plugins you want, and they show up as slash commands. On a new machine, you add the marketplace and install your skills. Done.</p>
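


<p>On a fresh machine, that whole dance is two slash commands inside Claude Code. I&#8217;m going from memory on the exact syntax, so verify against the plugin docs linked above:</p>



<pre class="wp-block-code"><code># hypothetical names; swap in your own repo and plugin
/plugin marketplace add yourname/your-marketplace
/plugin install your-skill-one@your-marketplace</code></pre>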



<p>The skills themselves are just instructions. There&#8217;s no SDK, no special syntax. You write what you want Claude to do in markdown, and it does it. If you can write a good prompt, you can write a skill.</p>



<h2 class="wp-block-heading">What I built in a weekend</h2>



<p>I&#8217;m not going to walk you through how to build each one. But I want to give you a sense of the kinds of workflows you can turn into skills, because the range is wider than you&#8217;d expect.</p>



<p><strong>Vault setup.</strong> I use Obsidian as a personal knowledge base. This skill scaffolds the entire vault structure on a new machine. Directories, templates, config files, and the CLAUDE.md that tells Claude how to navigate the vault. I run it once per machine and never think about it again.</p>



<p><strong>Session saving.</strong> At the end of every coding session, Claude automatically saves what happened to my Obsidian vault. Plans updated, decisions made, lessons learned, files changed. It appends to the project&#8217;s notes, writes a dated session log, and does it without me asking. I added an instruction to my global CLAUDE.md that tells Claude to invoke this skill at the end of each session. It just happens now.</p>



<p><strong>New project.</strong> When I start a new project, I say &#8220;new vault project,&#8221; and Claude creates a folder from a template: <code>plan.md</code> for goals and status, <code>notes.md</code> for decisions and learnings, and a <code>sessions</code> directory for logs. Consistent structure across every project without me remembering what goes where.</p>



<p><strong>Conventional commits.</strong> I wanted Claude to use a specific commit format: <code>type(scope): message</code>. Not a linter, not a git hook. Just Claude, knowing how I commit and doing it right every time. The skill checks the diff, proposes a message in the right format, and waits for me to confirm. Forty lines of markdown replaced a tool I would have had to install, configure, and maintain.</p>



<p><strong>Blog post research.</strong> I <a href="https://pixelflips.com/blog/let-me-reintroduce-myself">decided to write more this year</a>, and this skill has made it easier to stick to. It lets me bounce topic ideas off something, searches for what&#8217;s trending in my area, suggests angles I might not have considered, and pulls together an outline I can react to. When a post is ready, it even publishes a draft straight to my WordPress site. Saves me an hour of tab-hopping before I&#8217;ve even started writing.</p>



<p>Those are just a few from my marketplace. All markdown. All in one repo. But they cover enough ground to show what&#8217;s possible.</p>



<h2 class="wp-block-heading">The compound effect</h2>



<p>The interesting thing isn&#8217;t any single skill. It&#8217;s what happens when they stack.</p>



<p>Claude saves my session, which means next time it can read what happened last time. It uses my commit format, which means I never stare at a commit message wondering what format to use. It knows my vault structure, which means every project starts the same way and every session ends with a clean log.</p>



<p>Each skill removes one more thing I used to do manually. And because they&#8217;re all in the same marketplace repo, I can install the full set on any machine in about two minutes.</p>



<p>I&#8217;ve been thinking of it like this: most people configure their AI tools. Preferences, settings, maybe a CLAUDE.md file. That&#8217;s fine. But building skills is a different thing entirely. You&#8217;re teaching the AI how you work, not just what you like. The more you encode, the less friction there is between having an idea and executing it.</p>



<h2 class="wp-block-heading">Maybe you should, too</h2>



<p>I&#8217;m not saying everyone needs five custom skills and an Obsidian vault. But if you use an AI coding assistant daily and you keep repeating yourself, you already have the raw material for a skill. That commit format you keep correcting? That&#8217;s a skill. The way you structure new projects? Skill. The thing you do at the end of every session that you keep forgetting? Definitely a skill.</p>



<p>The whole point of working with AI is that it&#8217;s programmable. Not in the traditional sense. You don&#8217;t write functions and import libraries. You write instructions in plain language, and the AI follows them. If you&#8217;re going to use these tools every day, you might as well build the workflows that make them actually yours.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/so-i-put-my-ai-skills-in-a-marketplace/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10782</post-id>	</item>
		<item>
		<title>Three AIs Missed It. One Human Didn&#8217;t</title>
		<link>https://pixelflips.com/blog/three-ais-missed-it-one-human-didnt</link>
					<comments>https://pixelflips.com/blog/three-ais-missed-it-one-human-didnt#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 00:01:36 +0000</pubDate>
				<category><![CDATA[Design & Dev]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10741</guid>

					<description><![CDATA[I had done everything right. Or at least, everything the current playbook says to do. Used AI to fill the gaps, reviewed it myself, then handed it off to more AI. This past week, I got handed a project that required backend work. Not because I have backend experience &#8211; I don&#8217;t, really &#8211; but [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I had done everything right. Or at least, everything the current playbook says to do. Used AI to fill the gaps, reviewed it myself, then handed it off to more AI.</p>



<span id="more-10741"></span>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/code-help.jpg?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="902" height="547" src="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/code-help.jpg?resize=902%2C547&#038;ssl=1" alt="" class="wp-image-10749" srcset="https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/code-help.jpg?w=902&amp;ssl=1 902w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/code-help.jpg?resize=300%2C182&amp;ssl=1 300w, https://i0.wp.com/pixelflips.com/wp-content/uploads/2026/03/code-help.jpg?resize=768%2C466&amp;ssl=1 768w" sizes="auto, (max-width: 902px) 100vw, 902px" /></a></figure>



<p>This past week, I got handed a project that required backend work. Not because I have backend experience &#8211; I don&#8217;t, really &#8211; but because &#8220;AI can handle the parts you don&#8217;t know.&#8221; And honestly, that&#8217;s not entirely wrong. Claude Code got me through it. I wrote what I could, leaned on it for what I couldn&#8217;t, and when the dust settled, the code was in decent shape. I ran my own review. Frontend looked solid. A few tweaks. Tests passed. Manual walkthrough, nothing glaring.</p>



<p>I pushed the branch, opened the PR, and handed it to Cursor for a second pass. A couple of issues surfaced. Fixed them. Then CodeRabbitAI got its turn. A couple more. Fixed those too. Three layers of review, every issue resolved. I tagged my human colleagues, feeling pretty good about the whole thing.</p>



<p>That&#8217;s where the confidence ended.</p>



<h2 class="wp-block-heading">The Comment That Made Me Feel Two Things at Once</h2>



<p>My colleague, a full-stack engineer with real backend experience, left a comment on the PR. They knew going in that I didn&#8217;t have much backend context, so they didn&#8217;t just tell me what was wrong. Instead, they asked me to try something: open the app in two windows, change profiles in one, make updates in the other, try to update and save, and see what happens.</p>



<p>I did it. The profiles fell out of sync.</p>



<p>Once I saw it, it was obvious. The kind of thing where you go &#8220;oh, of course&#8221; the second it breaks. But I never would have thought to test that scenario on my own. I hadn&#8217;t spent years debugging backend state. I didn&#8217;t have the mental model for what could go wrong when a user has two sessions running simultaneously and starts mixing state between them. That&#8217;s hard-won intuition, built from experience I just don&#8217;t have yet. No AI tool flagged it either. Not Claude Code, not Cursor, not CodeRabbitAI. The bug just sat there, quiet, untouched by every layer of &#8220;smart&#8221; review I&#8217;d thrown at it.</p>
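<p>I won&#8217;t pretend the sketch below is the fix from that PR, but the class of bug is worth illustrating. One common guard is a version check that rejects stale writes. A toy example, with every name invented for the purpose:</p>

```javascript
// Hypothetical sketch, not the actual code from the PR: an optimistic
// concurrency check. Each stored profile carries a version number, and a
// write only succeeds if the caller saw the latest version, so a second,
// stale session cannot silently clobber state.
const store = new Map();

function saveProfile(id, data, expectedVersion) {
  const current = store.get(id) || { data: null, version: 0 };
  if (expectedVersion !== current.version) {
    // Another session updated this profile since we loaded it.
    return { ok: false, conflict: true, version: current.version };
  }
  const next = { data, version: current.version + 1 };
  store.set(id, next);
  return { ok: true, version: next.version };
}

// Two windows load the same profile at version 0.
const windowA = saveProfile("user-1", { theme: "dark" }, 0);  // succeeds
const windowB = saveProfile("user-1", { theme: "light" }, 0); // stale write, rejected
```

<p>The point isn&#8217;t this exact pattern. You only reach for something like it if you already know two sessions can race, and that knowledge is exactly what I was missing.</p>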



<p>What I appreciated most was how my colleague handled it. They didn&#8217;t write the fix in the comment. They didn&#8217;t tell me the answer. They trusted me to find it once I could see it. That&#8217;s a specific kind of mentorship, the kind that respects your ability to learn while acknowledging the gap. I was humbled and genuinely grateful at the same time, which is a weird combination but also kind of the best possible outcome of a code review.</p>



<h2 class="wp-block-heading">The Myth of the Foolproof Stack</h2>



<p>I&#8217;m not anti-AI. I use it every day, and I&#8217;m not going to pretend otherwise. But I want to be honest about what that PR showed me, because I don&#8217;t think the experience is unique to me.</p>



<p>We&#8217;ve been sold a version of AI-assisted development where layering more tools means fewer things fall through the cracks. More coverage, fewer blind spots, better outcomes. And when you&#8217;re in the middle of it, running your code through three different review passes, watching issues get flagged and resolved, it genuinely feels airtight. The pitch is clean. The reality, as it turns out, is messier.</p>



<p>Here&#8217;s the part that got me: CodeRabbit, one of the tools I used on that very PR, <a href="https://www.coderabbit.ai/blog/why-2025-was-the-year-the-internet-kept-breaking-studies-show-increased-incidents-due-to-ai" target="_blank" rel="noreferrer noopener">called 2025 &#8220;the year the internet broke.&#8221;</a> Change failure rates up 30%. Incidents per pull request up 23.5%. Their own data, about the exact category of tool I was using to feel safe. A separate <a href="https://www.techrepublic.com/article/ai-generated-code-outages/" target="_blank" rel="noreferrer noopener">2026 Sonar survey</a> found that teams using AI coding tools are 46% more likely to experience production incidents than those that don&#8217;t. That&#8217;s not a marginal difference.</p>



<p><a href="https://www.theregister.com/2026/02/20/amazon_denies_kiro_agentic_ai_behind_outage/" target="_blank" rel="noreferrer noopener">Amazon&#8217;s internal AI agent, Kiro,</a> autonomously decided the fix for a minor issue was to delete and rebuild a production environment. A thirteen-hour outage in AWS. Amazon called it &#8220;user error.&#8221; Sure, ok. <a href="https://www.theregister.com/2025/10/29/forrester_ai_rehiring/" target="_blank" rel="noreferrer noopener">Forrester</a> now predicts that 50% of companies that attributed headcount reductions to AI will quietly rehire for those same roles by 2027. <a href="https://www.cnbc.com/2025/12/19/google-boomerang-year-20percent-ai-software-devs-hired-2025-ex-employees.html" target="_blank" rel="noreferrer noopener">Google&#8217;s own numbers</a> show 20% of their 2025 AI engineering hires were former employees they&#8217;d previously let go.</p>



<p>That&#8217;s not optimization. That&#8217;s a very expensive loop.</p>



<h2 class="wp-block-heading">The Short Stick</h2>



<p>I understand AI is getting better. The models are improving, the tooling is maturing, and some of these failure modes will happen less as that continues. I genuinely believe that. I&#8217;m not writing this as someone who thinks we should go back to writing everything by hand.</p>



<p>But we&#8217;re living in the meantime. And in the meantime, companies are cutting engineers based on anticipated future capabilities, not demonstrated current ones. They&#8217;re shipping AI-reviewed code into production, watching things break, and then quietly hiring people back with no acknowledgment that the original bet didn&#8217;t pay off. The people who got laid off aren&#8217;t getting apologies or reinstatement. They&#8217;re getting contractor gigs at lower pay if they&#8217;re lucky. The idea that humans are optional is already starting to crack under its own evidence, but the cost of that lesson is being absorbed by the people who can least afford it.</p>



<p>The most valuable thing in that PR wasn&#8217;t a tool. It was a colleague who understood what I didn&#8217;t know, cared enough to teach me instead of just writing the fix themselves, and trusted me to get there once I could see the problem. That&#8217;s not something I can prompt my way to. No model is going to replicate that specific kind of knowledge transfer, built from real experience, offered with actual generosity.</p>



<p>AI caught some bugs on that PR. A human caught the one that mattered.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/three-ais-missed-it-one-human-didnt/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10741</post-id>	</item>
		<item>
		<title>AI Chose Your UI (Did It Choose Wrong?)</title>
		<link>https://pixelflips.com/blog/ai-chose-your-ui-did-it-choose-wrong</link>
					<comments>https://pixelflips.com/blog/ai-chose-your-ui-did-it-choose-wrong#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Sat, 07 Mar 2026 21:11:42 +0000</pubDate>
				<category><![CDATA[Design Systems]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10695</guid>

					<description><![CDATA[After years of building, maintaining, and supporting in-house design systems with real tokens, governance, versioning, support, and contribution models, I recently found myself building with Tailwind and shadcn. Thanks, AI! I&#8217;m familiar but relatively newish to these tools, so take this for what it is. I&#8217;m not anti-AI. I use it every day. But I&#8217;ve [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>After years of building, maintaining, and supporting in-house design systems with real tokens, governance, versioning, support, and contribution models, I recently found myself building with Tailwind and shadcn. Thanks, AI!</p>



<span id="more-10695"></span>



<p>I&#8217;m familiar but relatively newish to these tools, so take this for what it is. I&#8217;m not anti-AI. I use it every day. But I&#8217;ve been watching something happen across the industry that I think is worth talking about: It feels like AI is making our UI decisions for us, and we&#8217;re just going along with it. Maybe call me old school, but I think products should have their own soul when it comes to UI and UX. Otherwise, what actually sets us apart?</p>



<p>These days, someone fires up a vibe coding tool and builds a prototype. Could be a manager who&#8217;s excited about AI, a designer who started dabbling in code, a dev who wants to move fast. It comes back looking polished. React, Tailwind, shadcn, the whole default stack. It feels fast. It looks professional. And everyone accepts it. The UI direction, the component approach, the framework, the styling, all of it decided by whatever the AI defaulted to that day. By a language model optimizing for the most common output.</p>



<p>From what I can tell, this is happening more and more. And I don&#8217;t blame anyone for being excited. These tools can be fun to use. Designers can build something real without waiting on engineering. Decision-makers can see a working prototype in hours instead of weeks. That feels like progress, and in some ways, maybe it is. But there&#8217;s a difference between using AI as a tool and letting AI make the decisions. Right now, a lot of people are doing the second thing and calling it the first.</p>



<h2 class="wp-block-heading">Your product now looks like everyone else&#8217;s</h2>



<p>When you let AI decide your UI, everything converges. You can spot a shadcn app from across the room. The spacing, the rounded everything, the muted grays, the specific way the buttons and inputs feel. It&#8217;s the default aesthetic. Every vibe coding tool out there generates the same stack, so every prototype comes back looking like a cousin of the last one. That&#8217;s what happens when a model decides.</p>



<p>And there&#8217;s a feedback loop making it worse. AI generates shadcn code. More shadcn code ends up in training data. AI gets even better at generating shadcn code. Round and round. Your product&#8217;s look is being decided by what a language model saw the most during training, not by your brand team or your designers. Not by anyone who&#8217;s talked to your users. Super inspiring stuff.</p>



<p>If your product looks identical to your competitor&#8217;s product, what exactly is your differentiator? UI and UX used to be a competitive advantage. The way your app felt was part of why people chose it. When every SaaS dashboard looks like it came from the same AI prompt, that advantage disappears. Your product becomes a commodity before you even ship it.</p>



<p>shadcn released <a href="https://dev.to/vansh-codes/youre-using-shadcn-wrong-heres-the-right-way-to-customize-it-3656" target="_blank" rel="noreferrer noopener nofollow">theming presets</a> to address this. But swapping a color palette isn&#8217;t brand identity. It&#8217;s a skin. Your UI is part of your brand, and your UX is how people experience it. If your application looks and feels like every other AI-generated output, you&#8217;ve handed both to a default setting. Honestly, from a design standpoint, it feels lazy. We used to obsess over the details that made a product feel like <em>ours</em>. Now we&#8217;re accepting whatever the AI spits out because it looks clean enough. And it&#8217;s hard to even raise the concern because the AI made it look so good out of the box that questioning it feels like you&#8217;re the one slowing things down.</p>



<h2 class="wp-block-heading">Choosing what&#8217;s right vs. what&#8217;s common</h2>



<p>This is the part that gets lost. People see a polished AI-generated UI and assume the tool made a good decision. It didn&#8217;t decide at all. It predicted the most likely output based on its training data. shadcn and Tailwind are everywhere in that training data, so that&#8217;s what comes out. It&#8217;s not a recommendation. It&#8217;s a statistical echo.</p>



<p>But people are treating it like a recommendation. A manager vibe-codes a dashboard and thinks, &#8220;this is the direction.&#8221; A designer builds a prototype and assumes the stack is solid because the output looks professional. Nobody questions whether React is the right framework, whether Tailwind is the right styling approach, or whether shadcn is the right component strategy for their product and users. The AI picked it. It looks good. Ship it.</p>



<h2 class="wp-block-heading">That&#8217;s not a design system either</h2>



<p>On top of letting AI make UI decisions, people are also calling the output a design system. It&#8217;s not.</p>



<p>AI and shadcn give you components. They don&#8217;t give you governance, contribution models, versioning strategy, token architecture, or cross-team documentation. They don&#8217;t give you a shared language between design and engineering. What they give you is a folder full of React files you copied into your repo.</p>



<p>A design system is an organizational tool. The hard part is never &#8220;make a button look nice.&#8221; The hard part is making sure multiple teams use the same button the same way, that it evolves without breaking things, and that someone owns the decision about what &#8220;primary&#8221; means across your entire product suite. shadcn doesn&#8217;t try to solve that. It&#8217;s not designed to. And that&#8217;s fine for what it is. But calling it a design system? That&#8217;s like calling a pile of lumber a house.</p>



<h2 class="wp-block-heading">215 lines of React for a radio button</h2>



<p>I get why shadcn exists. Building accessible UI primitives from scratch is hard. Getting focus management, keyboard navigation, and screen reader support right takes real expertise. That&#8217;s the problem shadcn and Radix are trying to solve, and it&#8217;s a legitimate one. But the solution has costs that nobody&#8217;s weighing because the AI never brings them up.</p>



<p><a href="https://paulmakeswebsites.com/" target="_blank" rel="noreferrer noopener">Paul Hebert</a> recently <a href="https://paulmakeswebsites.com/writing/shadcn-radio-button/" target="_blank" rel="noreferrer noopener">tore down</a> the shadcn radio button component. 215 lines of React. Seven imported files. 30 Tailwind classes. All to recreate something HTML has done natively for 30 years.</p>



<p>Instead of using <code>&lt;input type="radio"&gt;</code>, shadcn renders a <code>&lt;button&gt;</code> with an SVG circle inside it, then uses ARIA attributes to tell screen readers it&#8217;s actually a radio button. Read that again. It&#8217;s a button pretending to be a radio button and relying on ARIA to cover for the fact that it didn&#8217;t just use the native element. The browser already solved this. Decades ago. But the AI doesn&#8217;t know that, and nobody asked.</p>
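<p>Side by side, the difference is hard to unsee. The second snippet is a simplified sketch of the ARIA-backed pattern, not shadcn&#8217;s literal output:</p>



<pre class="wp-block-code"><code>&lt;!-- Native: semantics, grouping, and keyboard support for free --&gt;
&lt;input type="radio" name="plan" value="pro" /&gt;

&lt;!-- The ARIA route: a button that has to re-declare all of that --&gt;
&lt;button role="radio" aria-checked="false"&gt;
  &lt;svg&gt;...&lt;/svg&gt;
&lt;/button&gt;</code></pre>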



<p>Zoom out, and the abstraction stack is wild. Tailwind abstracts CSS. shadcn abstracts Radix. Radix abstracts the DOM. React abstracts the DOM. Four layers between your user and a radio button. The AI chose every single one of those layers for you. It didn&#8217;t ask whether your team needs that complexity or whether a simpler approach would work. At enterprise scale, every one of those layers is a maintenance surface and a possible debugging headache. This is complexity cosplaying as simplicity.</p>



<p>There&#8217;s another cost nobody seems to talk about. Teams using Tailwind for long enough <a href="https://www.aleksandrhovhannisyan.com/blog/why-i-dont-like-tailwind-css/" target="_blank" rel="noreferrer noopener">start forgetting actual CSS</a>. The muscle memory goes away. When something breaks outside the utility class catalog, nobody knows how to fix it. You&#8217;ve traded foundational knowledge for convenience, and that trade gets expensive when things go sideways. The AI forgot to mention that part when it generated the code.</p>



<h2 class="wp-block-heading">You&#8217;re locked in now</h2>



<p>shadcn is React. Community ports exist for Vue and Svelte, but they&#8217;re unofficial, maintained by different people, with different APIs and different release timelines. If your enterprise has teams on Vue, Angular, Svelte, Rails, or plain JS, you&#8217;re now maintaining parallel component implementations or forcing everyone onto React whether they chose it or not.</p>



<p>Here&#8217;s what nobody notices: the AI chose React. Not your team. The prototype worked, it looked great, and it naturally became the starting point for real production code. Nobody went back to reconsider the foundation. Why would they? It worked in the demo.</p>



<p>Enterprise environments are messy. Legacy apps, acquired products on different stacks, internal tools built by teams who picked their own framework years ago. A design system needs to serve all of them. shadcn can&#8217;t.</p>



<p>What happens when React isn&#8217;t the dominant framework anymore? It will happen. jQuery was untouchable once. So was Bootstrap. If your UI is coupled to React&#8217;s ecosystem and release cycle, you&#8217;ve made a bet on one framework&#8217;s future. That bet has an expiration date.</p>



<h2 class="wp-block-heading">The maintenance and governance</h2>



<p>When you use shadcn, you copy source code into your repo. You now own every component. Bug fixes, accessibility patches, and breaking changes upstream are all your problem. There&#8217;s no upgrade path. When shadcn updates, you manually diff and merge. Across how many apps? How many teams?</p>



<p>Tailwind without strict governance <a href="https://evilmartians.com/chronicles/5-best-practices-for-preventing-chaos-in-tailwind-css" target="_blank" rel="noreferrer noopener nofollow">turns into class soup fast</a>. Every team invents its own patterns. One team uses <code>px-4 py-2</code> button padding, another uses <code>p-3</code>, a third wraps it in a custom class. Multiply that across hundreds of components and a dozen teams. Good luck with consistency.</p>



<p>Without shared tokens, &#8220;our blue&#8221; is three different hex values in three different repos. Without versioning, a component change in one app silently breaks patterns in another. Without contribution models, nobody knows who owns what or where the source of truth lives. That&#8217;s the work that goes into building a real system. None of it comes in the box with shadcn.</p>



<p>Nobody budgeted for any of this, by the way. Because nobody planned to adopt shadcn as the foundation. An AI picked it, a prototype got everyone excited, and now multiple teams are building on a decision that was never actually made.</p>



<h2 class="wp-block-heading">So who&#8217;s actually deciding what our products look like?</h2>



<p>That&#8217;s the question I keep coming back to. Vibe coding makes it easy for anyone to build something that looks production-ready, and that&#8217;s exciting. But &#8220;looks production-ready&#8221; and &#8220;is the right UI for your product&#8221; are not the same thing. Right now, AI is paving over that gap with polished output that feels like a decision was made when it wasn&#8217;t.</p>



<p>AI is great at generating code. It is not great at understanding your product, your users, or how your org works. It optimizes for &#8220;most common,&#8221; not &#8220;most appropriate.&#8221; And it will never tell you &#8220;yo, this might not be the right approach for what you&#8217;re building.&#8221; It just gives you shadcn and moves on.</p>



<p>Your UI, your UX, and your design system are business decisions. They touch brand, velocity, how teams work together, and how much technical debt gets carried for years. AI didn&#8217;t think about any of that when it decided what your product should look like.</p>



<p>I&#8217;m curious what other folks are seeing. Are we just accepting whatever UI the AI gives us now? Should vibe-coded prototypes quietly become our production UI? I might be wrong about some of this. I&#8217;d love to hear if others are seeing this and how they are navigating it.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/ai-chose-your-ui-did-it-choose-wrong/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10695</post-id>	</item>
		<item>
		<title>npm uninstall humans</title>
		<link>https://pixelflips.com/blog/npm-uninstall-humans</link>
					<comments>https://pixelflips.com/blog/npm-uninstall-humans#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Sun, 01 Mar 2026 04:56:05 +0000</pubDate>
				<category><![CDATA[Design & Dev]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10654</guid>

					<description><![CDATA[In software, a dependency is a risk you accept. It&#8217;s a package you didn&#8217;t write, maintained by someone you&#8217;ve never met, that can break your entire application if it disappears. Good engineering means knowing which dependencies are worth the risk and which ones aren&#8217;t. Right now, I&#8217;m watching our industry decide that people aren&#8217;t worth [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>In software, a dependency is a risk you accept. It&#8217;s a package you didn&#8217;t write, maintained by someone you&#8217;ve never met, that can break your entire application if it disappears. Good engineering means knowing which dependencies are worth the risk and which ones aren&#8217;t. Right now, I&#8217;m watching our industry decide that people aren&#8217;t worth it. It feels like the teams that build things are just another package to be evaluated, flagged, or uninstalled.</p>



<span id="more-10654"></span>



<p>This week, Jack Dorsey&#8217;s Block laid off <a href="https://apnews.com/article/block-dorsey-layoffs-ai-jobs-18e00a0b278977b0a87893f55e3db7bb">more than 4,000</a> people. Nearly half the company is gone. The stock jumped 24%. Dorsey told the remaining employees that most companies would reach the same conclusion within a year and make similar structural changes. Before that, Amazon <a href="https://www.bbc.com/news/articles/cx2ywzxlxnlo">cut 16,000</a> corporate roles. Chegg <a href="https://www.cnbc.com/2025/10/27/chegg-slashes-45percent-of-workforce-blames-new-realities-of-ai.html" target="_blank" rel="noreferrer noopener nofollow">gutted 45%</a> of its staff. HP announced plans to cut up to <a href="https://www.theguardian.com/business/2025/nov/26/computer-maker-hp-to-cut-up-to-6000-jobs-by-2028-as-it-turns-more-to-ai" target="_blank" rel="noreferrer noopener nofollow">6,000 positions</a>. Every single one of these moves was framed as an &#8220;AI efficiency&#8221; play. Significantly smaller teams. Leaner operations. The future.</p>



<p>I&#8217;ve been building for the web for a long time. Long enough to have seen multiple cycles of &#8220;this technology will change everything.&#8221; And some of them did. But I&#8217;ve never seen one where the pitch is this blunt: we are removing people and replacing them with software, and we think you should be excited about it. As someone who&#8217;s in the codebase <em>every day</em>, who has worked alongside people doing real, solid work only to watch them get let go, this one hits different. And if you&#8217;re a web worker right now, I know you&#8217;re feeling it too.</p>



<h2 class="wp-block-heading">Refactoring the Workforce</h2>



<p>The language around these cuts is telling. &#8220;Significantly smaller teams.&#8221; &#8220;Restructuring for AI.&#8221; It&#8217;s the vocabulary of refactoring, applied to people. And it reveals something ugly about how the people making these decisions think about the folks doing the work: we&#8217;re not assets to lead, mentor, or invest in. We&#8217;re bottlenecks to be optimized away at the first opportunity. Honestly, it&#8217;s gross.</p>



<p>But here&#8217;s what that framing misses entirely: the people and their teams are the platforms. The companies, codebases, and products are just the artifacts. The real platform, the thing that makes a product actually work, is the people who understand it. They hold the context, and these days, context is everything. They carry the intent. They know why a decision was made, not just what the decision was. Strip them out, and you don&#8217;t have a leaner company. You have a company, product, or codebase with no one left who understands it.</p>



<p>When you move the people who help build your product away from the foundational work, that&#8217;s not optimization. That&#8217;s a platform failure. You&#8217;re not reducing friction. You&#8217;re removing the people who understood where the friction came from in the first place.</p>



<p>This is the dependency trap. Someone looked at an org chart the way an engineer looks at a bloated <code>package.json</code> file and decided to rip things out. But people aren&#8217;t packages. You can&#8217;t just swap in an AI module and expect the same output, no matter how good the LLMs are getting.</p>



<h2 class="wp-block-heading">Frictionless Fantasy</h2>



<p>There&#8217;s this fantasy going around right now: the frictionless company. Small teams, AI-augmented, shipping at scale with minimal overhead. It sounds clean on a slide deck. In practice, it ignores everything that actually makes a product work.</p>



<p>Institutional knowledge. The connective tissue of company culture. The engineer who remembers why that weird edge case exists, or why the team decided against a particular API pattern three years ago. The person who knows that a feature looks simple but took six iterations to get right. The colleague who mentored you when you were new and didn&#8217;t know what you didn&#8217;t know. You can&#8217;t run <code>npm install</code> for that knowledge. You can&#8217;t prompt your way into it. It lives in the people who showed up every day and did the work, and when they are replaced with a fancy autocomplete, it&#8217;s gone.</p>



<p>When you treat people as interchangeable npm packages, you lose the thing that made them valuable: <strong>they were the platform</strong>. They weren&#8217;t just writing code or building a product. They were the reason any of it made a damn bit of sense. They carried context that no onboarding doc or AI summary can replicate. You wouldn&#8217;t rip a critical package out of production without understanding what depends on it. But that&#8217;s exactly what&#8217;s happening in this new AI era. And stripping them out in the name of efficiency creates a different kind of debt. Call it human debt. It compounds the same way technical debt does, quietly and then all at once, except nobody&#8217;s tracking it on a Linear board.</p>



<h2 class="wp-block-heading">Intent vs. Output</h2>



<p>AI is really damn good at generating output. Code, copy, designs. I use it every day, and I&#8217;m not going to pretend otherwise. But output without intent is just loud noise.</p>



<p>AI doesn&#8217;t fight for accessibility or correct implementation. It doesn&#8217;t push back on a PM about a dark pattern. It doesn&#8217;t sit in a meeting and say &#8220;this is going to hurt our users&#8221; when everyone else is nodding along. It does what it&#8217;s told. That&#8217;s useful, but it&#8217;s not stewardship.</p>



<p>Stewardship is the people who see a feature request and think about the user on the other end of it. The folks who build with empathy baked in, not because a linter told them to, but because they&#8217;ve been shipping long enough to understand what&#8217;s at stake. A product&#8217;s soul doesn&#8217;t live in the code. It lives within the people who care enough to fight for it. That intent is the human dependency that holds everything together. It&#8217;s the package you can&#8217;t find on any open source registry or have a robot generate from a poorly written prompt. And it&#8217;s the first thing that disappears when you &#8220;refactor&#8221; a team down to a skeleton crew.</p>



<h2 class="wp-block-heading">The Communication Vacuum</h2>



<p>What makes all of this worse is how it&#8217;s communicated. Or rather, how it isn&#8217;t.</p>



<p>There was a time when leadership meant advocating for the people doing the work. Now it feels like leadership means only auditing them. Reviewing the roster the way you&#8217;d review a <code>package-lock.json</code>. What can we flag? What can we drop? What&#8217;s the minimum viable team we can get away with? Somewhere along the way, leading people turned into looking down and deciding who&#8217;s worth keeping.</p>



<p><strong>4,000 people lose their jobs</strong>, and the stock jumps 24%. Record profits in the same quarter you eliminate half your workforce. And the message to those still at their desks? &#8220;We&#8217;re moving faster now.&#8221; No acknowledgment of what was lost. You&#8217;re just forced to disagree and commit. Oh yeah, and btw, you need to hurry up.</p>



<p>The anxiety this is creating in our industry goes beyond job security. It&#8217;s the slow realization that the people making these decisions don&#8217;t see us as people. They see us as a third-party dependency to be evaluated and potentially removed. That changes how you show up every day. It changes whether you fight for the thing that needs fighting for, or keep your head down because you don&#8217;t want to be the next module that gets uninstalled. And honestly? That is corrosive to us all. It eats at the exact qualities that made you good at your job in the first place.</p>



<h2 class="wp-block-heading">Human Dependency</h2>



<p>I&#8217;m not anti-AI. I think these tools are powerful, and I use them to do better work. But there&#8217;s a real difference between using AI to help people improve or perform better and using AI as a justification to remove them entirely. It&#8217;s a scapegoat, one that helps those at the top dodge any real accountability.</p>



<p>The human dependency isn&#8217;t a bug. It&#8217;s a feature and the entire point. We build products for people, and it takes people to know what that means. The judgment to get it right and the empathy to care whether we do: that comes from being human. From the ones in the trenches. The ones who are actually the entire platform.</p>



<p>So the next time someone frames a layoff as &#8220;AI efficiency,&#8221; ask yourself what&#8217;s actually being optimized away. Because it&#8217;s probably not friction. It&#8217;s most likely the people who actually gave a damn.</p>



<p>If you&#8217;re reading this and feeling the weight of it, just know you&#8217;re not alone. A lot of us are sitting with this same knot in our stomachs, watching the industry we love and have helped build move in a direction that doesn&#8217;t seem to love us back. It leaves a hole that no robot will be able to fill, and that&#8217;s a damn shame.</p>



]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/npm-uninstall-humans/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10654</post-id>	</item>
		<item>
		<title>You&#8217;re Automating the Wrong 70%</title>
		<link>https://pixelflips.com/blog/youre-automating-the-wrong-70</link>
					<comments>https://pixelflips.com/blog/youre-automating-the-wrong-70#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Sat, 28 Feb 2026 23:04:53 +0000</pubDate>
				<category><![CDATA[Design Systems]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10645</guid>

					<description><![CDATA[I came across a Medium post titled &#8220;AI Will Replace 70% of Design System Work.&#8221; The premise is that most design system work, documentation, component building, token management, accessibility audits, is &#8220;structurally automatable,&#8221; and that the real value lies in governance. The author argues that teams need to move &#8220;upward from execution to orchestration&#8221; or [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I came across a <a href="https://medium.com/@layerone_11803/ai-will-replace-70-of-design-system-work-e1efb7238cef" target="_blank" rel="noreferrer noopener nofollow">Medium post</a> titled &#8220;AI Will Replace 70% of Design System Work.&#8221; The premise is that most design system work, documentation, component building, token management, accessibility audits, is &#8220;structurally automatable,&#8221; and that the real value lies in governance. The author argues that teams need to move &#8220;upward from execution to orchestration&#8221; or risk becoming obsolete.</p>



<span id="more-10645"></span>



<p>Some of that is true. AI can help generate docs. It can lint tokens. It can draft release notes. I use AI tooling in my own design system work, and it&#8217;s useful for certain tasks. But the article&#8217;s argument falls apart because it misunderstands what design system teams actually do.</p>



<h2 class="wp-block-heading">The 70% number is made up</h2>



<p>Let&#8217;s start with the headline claim. 70% of design system work is replaceable by AI. Where does that number come from? The article never says. There&#8217;s no methodology, no survey, no data. It&#8217;s a confident assertion dressed up as a finding. That&#8217;s not analysis. That&#8217;s vibes.</p>



<p>And it matters because numbers like that end up in executive slide decks. They get used to justify headcount decisions. When you throw around &#8220;70%&#8221; with no backing, you&#8217;re not starting a conversation; you&#8217;re handing leadership a reason to cut your team.</p>



<h2 class="wp-block-heading">The whole middle is missing</h2>



<p>The article describes two modes of design system work: production (automatable) and governance (not automatable). That&#8217;s a clean framework, but it leaves out where most of the actual work happens.</p>



<p>Support. Maintenance. The daily human work of keeping a system alive and useful.</p>



<p>Answering questions in Slack. Pairing with a product engineer who&#8217;s trying to use your component in a context you didn&#8217;t anticipate. Triaging a bug that only surfaces in one team&#8217;s specific tech-stack setup. Writing a migration guide that accounts for six different integration patterns across your org. Helping a designer understand why the system works a certain way so they can make better decisions in their product.</p>



<p>This is humans supporting humans. It&#8217;s the thing that actually drives adoption, and it doesn&#8217;t show up in the article at all. You can&#8217;t automate a relationship. You can&#8217;t deploy a governance framework and expect people to trust your system. Trust is built through responsiveness. When someone files an issue and gets a thoughtful reply the same day, that&#8217;s what earns buy-in. No contribution policy document does that.</p>



<h2 class="wp-block-heading">Fast and wrong is expensive</h2>



<p>The article mentions that AI-generated output &#8220;was not perfect, but it was fast.&#8221; That sentence is doing a lot of heavy lifting. Fast and wrong is expensive. Someone still needs to review every AI-generated component API and token structure, and that someone needs deep expertise to know what &#8220;right&#8221; looks like.</p>



<p>You&#8217;re not eliminating the expert. You&#8217;re just changing what they do with their hands. The review work that remains requires the same knowledge, maybe more, because now you&#8217;re also debugging AI&#8217;s confident mistakes alongside your own.</p>



<h2 class="wp-block-heading">The feedback loop doesn&#8217;t automate</h2>



<p>Design systems are living things. You ship a component, teams adopt it, they use it in ways you didn&#8217;t expect, they file bugs, they request variants, you learn from that and iterate. That feedback loop is the engine of a healthy system.</p>



<p>AI doesn&#8217;t have relationships with your consumers. It doesn&#8217;t know that one team keeps misusing your modal because their product has a weird user flow. It doesn&#8217;t pick up on the pattern that three different teams have asked for the same thing in three different ways. That&#8217;s institutional knowledge built through support work, through being present, through paying attention to how people actually use what you&#8217;ve built.</p>



<h2 class="wp-block-heading">Governance without execution is just meetings</h2>



<p>The article elevates governance as the strategic, non-automatable layer. Governance matters, I&#8217;m not arguing that it doesn&#8217;t. But governance without strong execution underneath it is just Confluence pages and meeting invites. You earn the right to govern through the quality and reliability of what you ship. The components have to work. The documentation has to be accurate. The releases have to not break things. That foundation isn&#8217;t maintenance you automate away. It&#8217;s the thing that makes governance credible.</p>



<h2 class="wp-block-heading">The real risk</h2>



<p>The article says the true risk is that &#8220;some design systems never evolved beyond being structured UI libraries.&#8221; I&#8217;d argue the bigger risk is articles like this giving leadership permission to gut the teams that make systems work. If a VP reads &#8220;AI will replace 70% of this function&#8221; and takes it at face value, the people who get cut aren&#8217;t the governance strategists. It&#8217;s the engineers and designers who do the daily work of building, supporting, and maintaining the system.</p>



<p>Design systems aren&#8217;t factories, and they aren&#8217;t governance frameworks. They&#8217;re a service. The value lives in the ongoing, responsive, human relationship between the system team and the people who depend on it. AI is a useful tool in that work. It&#8217;s not a replacement for the people doing it.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/youre-automating-the-wrong-70/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10645</post-id>	</item>
		<item>
		<title>Your Design System&#8217;s Got Skills?</title>
		<link>https://pixelflips.com/blog/your-design-systems-got-skills</link>
					<comments>https://pixelflips.com/blog/your-design-systems-got-skills#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Sat, 21 Feb 2026 20:49:48 +0000</pubDate>
				<category><![CDATA[Design Systems]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10579</guid>

					<description><![CDATA[I&#8217;ve been tinkering with something lately that I think more design system teams should pay attention to. I&#8217;m trying to teach AI how my design system works through Claude skills and Cursor rules. The problem AI has with your design system If you&#8217;ve used any AI coding tool to generate UI, you&#8217;ve seen this. You [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I&#8217;ve been tinkering with something lately that I think more design system teams should pay attention to. I&#8217;m trying to teach AI how my design system works through Claude skills and Cursor rules.</p>



<span id="more-10579"></span>



<h2 class="wp-block-heading">The problem AI has with your design system</h2>



<p>If you&#8217;ve used any AI coding tool to generate UI, you&#8217;ve seen this. You ask for a card component, and it gives you something that looks right but uses none of your tokens, references components that don&#8217;t exist in your library, and ignores every pattern your team spent months establishing. The output is plausible. It&#8217;s also wrong in ways that are tedious to untangle.</p>



<p>Your design system has opinions. It has a token taxonomy, component APIs, composition patterns, and a11y requirements. AI doesn&#8217;t know any of that. It&#8217;s working from general training data, not your system. So it gives you generic Bootstrap-flavored markup and leaves you to clean it up.</p>



<p>That cleanup time adds up fast, especially across a team.</p>



<h2 class="wp-block-heading">Skills and rules: the 80/20 play</h2>



<p>MCP servers for design systems are getting a lot of attention right now, and rightfully so. That&#8217;s where I spent my own early effort, building and setting up a server to provide context for the design system I work on. MCP is powerful: it lets AI tools query your system&#8217;s components, tokens, and docs through a structured protocol. But it requires infrastructure. You need to build and run a server.</p>



<p><a href="https://code.claude.com/docs/en/skills" target="_blank" rel="noreferrer noopener nofollow">Claude skills</a> and <a href="https://cursor.com/docs/context/rules" target="_blank" rel="noreferrer noopener nofollow">Cursor rules</a> are the simpler path. A skill is a markdown file with instructions, reference material, and optional scripts. A Cursor rule is very similar: a text file that gives the AI context about your project&#8217;s conventions. No server, no API. Just documents that describe how your system works, loaded into context when relevant.</p>
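

<p>As a rough illustration, here&#8217;s the shape a project-level Cursor rule can take. The frontmatter fields come from the Cursor docs; the glob, package name, and guidance are made up for this sketch, not from a real system:</p>



<pre class="wp-block-code"><code>---
description: Design system component conventions
globs: src/components/**/*.tsx
alwaysApply: false
---

- Use components from our library (hypothetical package: @acme/ui) instead of raw HTML elements.
- Style with semantic tokens like color-surface-primary, never hard-coded hex values.
- Every interactive element needs a visible focus state and an accessible name.</code></pre>



<p>Cursor picks files like this up from <code>.cursor/rules/</code> and loads them when matching files are in context, so the guidance rides along with the code it applies to.</p>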



<p>I&#8217;ve started investigating and building a couple of these for my own work, and the payoff has been immediate. Instead of fixing AI output that ignores my component library, the AI starts from the right place. It knows which components exist and which tokens to use by either guiding it to the proper files or providing an explanation right inside the file itself. I am still in early trials myself, and it&#8217;s not perfect, but the gap between what it generates and what I&#8217;d actually ship has gotten a lot smaller.</p>



<h2 class="wp-block-heading">Types of skills your DS team could build</h2>



<p>Out of the gate, I&#8217;ve been sketching out a few skills that I think could be useful.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th><strong>Skills for a DS Team</strong></th></tr></thead><tbody><tr><td><strong>Component usage</strong><br>Which component to use and when. Props, slots, do/don&#8217;t patterns.</td></tr><tr><td><strong>Token references</strong><br>Semantic vs. primitive. Theming logic. Multi-brand mappings.</td></tr><tr><td><strong>Accessibility rules</strong><br>ARIA, focus management, contrast. Catch at source, not in review.</td></tr><tr><td><strong>Migration/upgrades</strong><br>Map old APIs to new. Developer guidance from legacy to current.</td></tr><tr><td><strong>Documentation</strong><br>Standards for component docs. Contribution guidelines and requirements.</td></tr><tr><td><strong>Code patterns</strong><br>File structure, component naming, or any shared knowledge.</td></tr></tbody></table></figure>



<p>A component usage skill is probably the highest-value starting point. Which component to reach for in common scenarios, correct props, slot patterns, and do/don&#8217;t examples from your docs. This alone changes the quality of what AI generates against your system.</p>
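

<p>To make that concrete, a component usage skill might be a <code>SKILL.md</code> along these lines. The frontmatter shape follows the Claude skills docs; the component names, props, and rules are invented for illustration:</p>



<pre class="wp-block-code"><code>---
name: component-usage
description: How to choose and use our design system components. Use when generating or editing UI code.
---

# Component usage

- For grouped content with a heading and actions, reach for Card, not a bare &lt;div&gt;.
- Button takes variant="primary" | "secondary" | "ghost". Don't invent new variants.
- Don't build custom dropdowns. Use Select, which already handles keyboard and ARIA behavior.</code></pre>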



<p>A token reference seems like the natural companion. Your full taxonomy with guidance on when to use semantic vs. primitive tokens. If your system supports theming or multiple brands, this is where you encode that logic, so AI stops guessing.</p>



<p>An accessibility skill bakes your a11y standards into generation. ARIA patterns, keyboard navigation, focus management, contrast expectations. Catch it at the source instead of flagging it in review.</p>



<p>If you&#8217;ve got a legacy system and a current one (I&#8217;m dealing with this right now), a migration skill that knows both APIs and can map between them saves real time. A documentation skill keeps contributions consistent without a style guide that nobody reads. A code patterns skill captures your team&#8217;s file structure and naming conventions, the tribal knowledge that usually lives in someone&#8217;s head.</p>



<h2 class="wp-block-heading">Project skills vs. personal skills</h2>



<p>Not every skill needs to live in the repo. This is something I&#8217;ve been thinking about as I work on more of these.</p>



<p>Some skills belong at the project level, committed to the repo where the whole team benefits. Component usage, token reference, and a11y rules. These are the source-of-truth skills that keep everyone generating consistent, on-system code. When a new developer joins the team and starts using AI from day one, the AI already knows how your team works.</p>



<p>Other skills are just for you. Maybe you&#8217;ve got one for how you scaffold new components, or one that matches your PR description format, or shortcuts for tasks you repeat often, like version bump PRs. These can live in your user-level config and make you faster without imposing your preferences on anyone else.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th><strong>PROJECT SKILLS</strong></th><th><strong>PERSONAL SKILLS</strong></th></tr></thead><tbody><tr><td><strong>Component usage</strong><br>Which component, props, patterns</td><td><strong>Component scaffolding</strong><br>Your preferred file structure</td></tr><tr><td><strong>Token references</strong><br>Taxonomy, semantic vs. primitives</td><td><strong>Commits &amp; PR descriptions</strong><br>Your format and conventions</td></tr><tr><td><strong>Accessibility rules</strong><br>ARIA, focus, contrast guidelines</td><td><strong>Daily shortcuts</strong><br>Repeated tasks, personal workflows</td></tr><tr><td><strong>Migration guides</strong><br>Legacy to new API mappings</td><td><strong>Code review helpers</strong><br>Your review checklist and style</td></tr></tbody></table></figure>



<p>The split matters. Project skills enforce consistency. Personal skills are about your own speed. Mixing them up creates friction, and nobody wants to inherit someone else&#8217;s personal workflow baked into the repo.</p>
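

<p>In Claude Code, that split maps onto where a skill lives. Per the Claude docs, project skills sit in the repo and personal skills in your home directory; the skill names here are just my examples:</p>



<pre class="wp-block-code"><code># Project skills: committed, shared with the whole team
.claude/skills/component-usage/SKILL.md
.claude/skills/token-reference/SKILL.md

# Personal skills: your machine only
~/.claude/skills/pr-descriptions/SKILL.md</code></pre>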



<h2 class="wp-block-heading">Why this matters now</h2>



<p>Developers are already using AI to write code against your design system. That&#8217;s happening whether you&#8217;ve prepared for it or not. The question is whether the AI knows your system or is guessing.</p>



<p>I <a href="https://pixelflips.com/blog/the-ai-productivity-paradox" data-type="link" data-id="https://pixelflips.com/blog/the-ai-productivity-paradox">wrote recently</a> about the AI productivity paradox, how AI often increases workload instead of decreasing it because you spend so much time reviewing and fixing output. Skills and rules attack that problem at the source. Give AI the right context upfront, and there&#8217;s less to fix on the other end.</p>



<p>We spent years meeting designers in Figma with component libraries. We met developers in their repos with npm packages. Now we need to meet them inside their AI tools. Skills and rules are how you do that without a big infrastructure investment.</p>



<h2 class="wp-block-heading">What are you building?</h2>



<p>I&#8217;m still early in this. I&#8217;ve got a few skills running that have already changed how I work, but I know there&#8217;s a lot more to figure out. Have you built Claude skills or Cursor rules for your design system? Found an approach that works well?</p>



<p>I&#8217;d love to hear what people are doing. If you&#8217;ve got something working, tell me about it.</p>



]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/your-design-systems-got-skills/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10579</post-id>	</item>
		<item>
		<title>Let Me Reintroduce Myself</title>
		<link>https://pixelflips.com/blog/let-me-reintroduce-myself</link>
					<comments>https://pixelflips.com/blog/let-me-reintroduce-myself#respond</comments>
		
		<dc:creator><![CDATA[Phillip Lovelace]]></dc:creator>
		<pubDate>Sat, 21 Feb 2026 00:51:34 +0000</pubDate>
				<category><![CDATA[Personal]]></category>
		<guid isPermaLink="false">https://pixelflips.com/?p=10596</guid>

					<description><![CDATA[I came across a post by Cassidy Williams a few weeks ago about LLM discoverability. The gist: she asked ChatGPT some tech discovery questions, noticed it didn&#8217;t recommend her, then asked why. The AI gave her a list of things to fix. She fixed them. It worked. I read that and immediately thought: what happens [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I came across a post by Cassidy Williams a few weeks ago about <a href="https://cassidoo.co/post/ai-llm-discoverability/" target="_blank" rel="noreferrer noopener">LLM discoverability</a>. The gist: she asked ChatGPT some tech discovery questions, noticed it didn&#8217;t recommend her, then asked why. The AI gave her a list of things to fix. She fixed them. It worked.</p>



<span id="more-10596"></span>



<p>I read that and immediately thought: what happens if I try this?</p>



<h2 class="wp-block-heading">What Cassidy did</h2>



<p>Her experiment was simple. Ask LLMs questions that should surface her work, see if they mention her, and if they don&#8217;t, ask them what she could do about it. The AI came back with practical stuff: create an llms.txt file, add structured data, keep messaging consistent across platforms. She implemented the suggestions over a few weeks and started showing up in LLM responses.</p>
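

<p>For context, <code>llms.txt</code> is a proposed convention: a markdown file at your site root with a title, a one-line summary, and annotated links, so an LLM has a clean map of what you publish. A minimal sketch for a site like this one (the contents here are my own invention, not a real file) might be:</p>



<pre class="wp-block-code"><code># Pixelflips

&gt; Phillip Lovelace writes about design systems, UX engineering, and AI tooling.

## Posts

- [Your Design System's Got Skills?](https://pixelflips.com/blog/your-design-systems-got-skills): teaching AI tools your design system through Claude skills and Cursor rules</code></pre>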



<p>What stuck with me was less the technical recommendations and more the realization that people are using AI to discover people and tools now. Not just Google. If an LLM doesn&#8217;t know you exist, a lot of people won&#8217;t find you either.</p>



<h2 class="wp-block-heading">So I tried it</h2>



<p>I asked a few LLMs about me. About my work. Design systems people to follow, UX developers worth knowing, folks writing about the intersection of design and engineering. The results were humbling. Mostly blank. A couple got my name right but didn&#8217;t have much to say beyond that.</p>



<p>So I did what Cassidy did. I asked the AI: what should I do to show up more? What would make you recommend me?</p>



<p>The answer that kept coming back, across every model I tried: write more. Publish more. Put your thinking out in the open. The models learn from what&#8217;s publicly available, and if you&#8217;re not publishing, you&#8217;re invisible to them.</p>



<p>There&#8217;s something kind of absurd about an AI telling you to create more content so it can learn who you are. Like a robot tapping you on the shoulder and saying, &#8220;hey, I&#8217;d recommend you to people, but I don&#8217;t have enough to go on. Help me out here.&#8221;</p>



<h2 class="wp-block-heading">So here we go</h2>



<p>I&#8217;m taking the advice. Not entirely because an AI told me to. I&#8217;ve been meaning to write more for a while now. I&#8217;ve got opinions about design systems and AI tooling and the weird space between design and engineering that I&#8217;ve lived in for years. I just haven&#8217;t been putting them out there consistently.</p>



<p>So here I am, writing. The robots demanded it, and honestly, they&#8217;re not wrong.</p>



<p>Let&#8217;s see if they notice.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://pixelflips.com/blog/let-me-reintroduce-myself/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10596</post-id>	</item>
	</channel>
</rss>
