<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>Hongkiat</title>
	<atom:link href="https://www.hongkiat.com/blog/feed/" rel="self" type="application/rss+xml"/>
	<link>https://www.hongkiat.com/blog/</link>
	<description>Tech and Design Tips</description>
	<lastBuildDate>Mon, 06 Apr 2026 11:19:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">1070734</site>	<xhtml:meta content="noindex" name="robots" xmlns:xhtml="http://www.w3.org/1999/xhtml"/><item>
		<title>How to Connect Hermes to Telegram</title>
		<link>https://www.hongkiat.com/blog/connect-hermes-to-telegram-2/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 15:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74278</guid>

					<description><![CDATA[<p>A practical step-by-step guide to connecting Hermes to Telegram, from bot creation to config, restart, and troubleshooting.</p>
<p>The post <a href="https://www.hongkiat.com/blog/connect-hermes-to-telegram-2/">How to Connect Hermes to Telegram</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>One of the nicest things about <a href="https://www.hongkiat.com/blog/run-offline-chat-assistant/">running a local AI agent</a> is that it does not have to stay trapped on the machine where it started.</p>
<p>Once I connected <a rel="nofollow noopener" target="_blank" href="https://hermesagent.ai/">Hermes</a> to Telegram, it became much more useful. I could message it from anywhere, test workflows quickly, and treat it less like a terminal-bound tool and more like an assistant I could actually reach when I needed it.</p>
<p>The setup is not hard, but a few details matter. The main one is this: Hermes splits credentials and behavior into different places. Your secrets go in <code>.env</code>. Your Telegram behavior lives in <code>config.yaml</code>. Put those in the wrong place and you will waste time for no good reason.</p>
<p>Here’s the clean way to set it up.</p>
<h2 id="before-you-start">What You Need Before You Start</h2>
<p>Before touching any config files, make sure you already have:</p>
<ul>
<li>Hermes installed on your machine</li>
<li>a Telegram account</li>
<li>a Telegram bot token, which you will create in a moment</li>
</ul>
<h2 id="create-a-bot">Create a Telegram Bot</h2>
<p>Open Telegram and search for <strong>@BotFather</strong>.</p>
<p>Start a chat and run:</p>
<pre>/newbot</pre>
<figure><img fetchpriority="high" decoding="async" src="https://assets.hongkiat.com/uploads/connect-hermes-to-telegram/telegram-newbot.jpg" width="2266" height="1444" alt="telegram newbot"></figure>
<p>BotFather will walk you through the rest. You give the bot a name, then a username, and in return Telegram gives you a bot token that looks something like this:</p>
<pre>123456789:ABCdefGhIJKlmNoPQRsTUVwxyZ</pre>
<p>Save that token somewhere safe. You will need it shortly, and you should not paste it around carelessly.</p>
<h2 id="find-chat-id">Find Your Telegram Chat ID</h2>
<p>Next, search for <strong>@userinfobot</strong> in Telegram and start a conversation with it.</p>
<p>It replies with your numeric Telegram user ID. It will look something like this:</p>
<pre>123456789</pre>
<p>Save that too.</p>
<p>This is the ID Hermes uses to decide who is allowed to talk to your bot.</p>
<figure><img decoding="async" src="https://assets.hongkiat.com/uploads/connect-hermes-to-telegram/telegram-user-id.jpg" width="2266" height="1444" alt="telegram find user id"></figure>
<h2 id="put-credentials-env">Put Credentials in .env</h2>
<p>This is the part that is easy to get wrong.</p>
<p>Your Telegram bot token and allowed user list belong in:</p>
<pre>~/.hermes/.env</pre>
<p>They do <strong>not</strong> belong in <code>config.yaml</code>.</p>
<p>Open <code>~/.hermes/.env</code> and add or update these values:</p>
<pre># Telegram bot token from @BotFather
BOT_TOKEN=your_bot_token_here
TELEGRAM_BOT_TOKEN=your_bot_token_here

# Your numeric Telegram user ID from @userinfobot
# For multiple users, separate with commas
TELEGRAM_ALLOWED_USERS=123456789</pre>
<p>If you are the only one who should be able to talk to the bot, keep just your own ID there. If multiple people should have access, separate their IDs with commas.</p>
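<p>If you are curious what that allowlist amounts to, the check itself is simple: split the comma-separated value into numeric IDs and compare the sender against them. Here is a minimal Python sketch of that logic (the function names are illustrative, not Hermes internals):</p>
<pre>def load_allowed_users(env_value):
    # Parse a comma-separated TELEGRAM_ALLOWED_USERS value into a set of IDs.
    return {int(part) for part in env_value.split(",") if part.strip()}

def is_allowed(sender_id, allowed):
    # An empty allowlist means nobody gets in -- fail closed.
    return sender_id in allowed

allowed = load_allowed_users("123456789, 987654321")
print(is_allowed(123456789, allowed))  # True
print(is_allowed(555555555, allowed))  # False</pre>
<p>Whatever the exact implementation, the effect is the same: IDs not on that list are ignored.</p>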
<h2 id="set-config-yaml">Set Telegram Behavior in config.yaml</h2>
<p>Once credentials are in place, configure how Hermes should behave on Telegram.</p>
<p>Open:</p>
<pre>~/.hermes/config.yaml</pre>
<p>Then add a <code>telegram:</code> section if it does not already exist:</p>
<pre>telegram:
  require_mention: true</pre>
<p>This setting controls whether Hermes only responds when explicitly mentioned.</p>
<p>With <code>require_mention: true</code>, you need to call the bot directly in the chat, usually by mentioning its username.</p>
<p>With <code>false</code>, Hermes responds to all messages in allowed chats, which is fine in some setups but not always what you want.</p>
<p>For most people, <code>true</code> is the safer default.</p>
<h2 id="add-a-prefix">Optionally Add a Prefix</h2>
<p>If you have multiple bots in the same chat, or just want cleaner control over what triggers Hermes, you can also define a prefix.</p>
<p>Example:</p>
<pre>telegram:
  require_mention: true
  prefix: "/hermes"</pre>
<p>That means users must type something like:</p>
<pre>/hermes summarize this</pre>
<p>It is optional, but useful if the bot shares a noisy group chat with other tools. If you want to go broader than Telegram later, this roundup of <a href="https://www.hongkiat.com/blog/tools-to-build-chatbots/">tools to build your own chatbots</a> is a decent next stop.</p>
<h2 id="restart-gateway">Restart the Gateway</h2>
<p>After editing <code>.env</code> and <code>config.yaml</code>, restart the Hermes gateway so it reloads the new settings:</p>
<pre>hermes gateway restart</pre>
<p>If you want to confirm it actually came back up, run:</p>
<pre>hermes gateway status</pre>
<p>That is usually enough. No drama required.</p>
<h2 id="test-the-bot">Test the Bot</h2>
<p>Now open Telegram, find your bot by its username, and send it a message.</p>
<p>Something simple is fine:</p>
<pre>Hello, are you there?</pre>
<p>If everything is configured correctly, Hermes should reply.</p>
<figure><img decoding="async" src="https://assets.hongkiat.com/uploads/connect-hermes-to-telegram/first-contact.jpg" width="2266" height="1444" alt="first contact"></figure>
<p>And that is the moment it stops feeling like a local experiment and starts feeling properly useful.</p>
<h2 id="what-goes-wrong">What Usually Goes Wrong</h2>
<p>If Hermes does not respond, these are the first things I would check.</p>
<h3 id="token-wrong">The Token Is Wrong</h3>
<p>Double-check that the bot token in <code>.env</code> is correct.</p>
<p>One bad character is enough to make the whole thing quietly fail.</p>
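<p>You can also verify the token independently of Hermes by calling the Telegram Bot API directly. The <code>getMe</code> endpoint returns your bot's details when the token is valid (with <code>BOT_TOKEN</code> set in your shell):</p>
<pre>curl -s "https://api.telegram.org/bot${BOT_TOKEN}/getMe"</pre>
<p>A valid token returns JSON with <code>"ok":true</code> and the bot's username; a bad one returns <code>"ok":false</code> with a 401 error.</p>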
<h3 id="user-id-wrong">The Allowed User ID Is Wrong</h3>
<p>Make sure <code>TELEGRAM_ALLOWED_USERS</code> contains the correct numeric Telegram user ID.</p>
<p>If the ID is wrong, Hermes may be working perfectly and still ignore you.</p>
<h3 id="gateway-not-restarted">The Gateway Was Not Restarted</h3>
<p>If you edited the config but did not restart the gateway, Hermes may still be running with the old settings.</p>
<h3 id="wrong-token-loaded">The Wrong Bot Token Is Being Loaded</h3>
<p>If you manage multiple Telegram bots, make sure Hermes is reading the token you think it is reading.</p>
<p>That one gets people more often than they admit.</p>
<h3 id="network-issues">Network or Firewall Issues</h3>
<p>If outbound connections to Telegram are blocked, the bot will not respond, no matter how correct your config is.</p>
<h2 id="security-note">A Quick Security Note</h2>
<p>Do not share your bot token publicly.</p>
<p>Treat it like a credential, because it is one.</p>
<p>Also, keep the <code>TELEGRAM_ALLOWED_USERS</code> list tight. That list is effectively your access control layer for Telegram chat access.</p>
<h2 id="final-thought">Final Thought</h2>
<p>Once Hermes is connected to Telegram, the whole setup feels different.</p>
<p>It is still your local agent. It is still running on your machine. But now it is reachable from anywhere, which makes it much easier to use in real life instead of only when you happen to be sitting at your desk.</p>
<p>If you set it up cleanly, the whole thing is surprisingly painless.</p><p>The post <a href="https://www.hongkiat.com/blog/connect-hermes-to-telegram-2/">How to Connect Hermes to Telegram</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74278</post-id>	</item>
		<item>
		<title>10 OpenClaw Alternatives Worth Trying</title>
		<link>https://www.hongkiat.com/blog/openclaw-alternatives/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 13:00:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74251</guid>

					<description><![CDATA[<p>These OpenClaw alternatives range from tiny local runtimes to full-featured assistants, with options for better speed, privacy, or lower resource use.</p>
<p>The post <a href="https://www.hongkiat.com/blog/openclaw-alternatives/">10 OpenClaw Alternatives Worth Trying</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The AI agent ecosystem is evolving quickly. While <a rel="nofollow noopener" target="_blank" href="https://openclaw.ai">OpenClaw</a> helped popularize accessible agent workflows, developers are now exploring tools that offer faster performance, better efficiency, and different architectural approaches. Many newer frameworks focus on lower latency and smaller resource footprints.</p>
<p>In this article, we’ll look at <strong>10 OpenClaw alternatives</strong>, from lightweight options like <a href="#picoclaw">PicoClaw</a> to more full-featured tools like <a href="#moltis">Moltis</a>, to help you choose the right fit for your next AI agent project.</p>
<p>Let’s start with <a href="#nanobot">Nanobot</a>.</p>
<h2 id="nanobot"><a rel="nofollow noopener" target="_blank" href="https://github.com/HKUDS/nanobot">Nanobot</a></h2>
<p><strong>Nanobot</strong> is a lightweight, open-source AI agent framework developed by <strong>the University of Hong Kong (HKUDS)</strong>. It is designed as a minimal alternative to OpenClaw.</p>
<p>It provides similar capabilities in only a few thousand lines of Python. That makes it easier to deploy, modify, and run on modest hardware. It also supports multiple LLM providers with built-in memory.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/nanobot.jpg" width="2000" height="1026" alt="Nanobot framework screenshot"><figcaption>Nanobot keeps things lightweight without stripping away the core agent features developers actually use.</figcaption></figure>
<h3 id="nanobot-pros">Pros</h3>
<ul>
<li><strong>Lower resource footprint</strong>: It starts in less than a second and uses very little RAM. That makes it a practical fit for a cheap VPS or an older laptop.</li>
<li><strong>Regional messaging support</strong>: Besides popular platforms like WhatsApp and Telegram, it supports Asian enterprise platforms like <strong>DingTalk</strong> and <strong>QQ</strong>.</li>
<li><strong>Built-in MCP support</strong>: It leans on MCP, so you can plug in MCP-compatible tools like Brave Search or Google Drive without extra plugins or external tooling.</li>
</ul>
<h3 id="nanobot-cons">Cons</h3>
<ul>
<li><strong>No GUI</strong>: Unlike OpenClaw, Nanobot is strictly CLI- and chat-based. If you want a point-and-click experience, this is not it.</li>
<li><strong>Fewer out-of-the-box skills</strong>: OpenClaw has ClawHub, a registry of community-made skills. With Nanobot, you’re more likely to write a small script or connect an MCP server yourself.</li>
</ul>
<p>Nanobot is a strong choice if you want a fast, <strong>hackable framework</strong> for building personal AI agents.</p>
<h2 id="nullclaw"><a rel="nofollow noopener" target="_blank" href="https://nullclaw.io/">NullClaw</a></h2>
<p><strong>NullClaw</strong> is a high-performance, open-source AI agent runtime built as a tiny single-binary tool. It is designed for efficiency, so it can run on ultra-low-end hardware with near-instant boot times while still supporting 50+ LLM providers, secure sandboxing, and modular integrations.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/nullclaw.jpg" width="2000" height="1200" alt="NullClaw interface screenshot"><figcaption>NullClaw is built for speed, tiny footprints, and hardware that would make heavier runtimes complain.</figcaption></figure>
<h3 id="nullclaw-pros">Pros</h3>
<ul>
<li><strong>Better performance</strong>: It is known for its 678 KB binary size and roughly 1 MB RAM footprint. Compare that with OpenClaw, which is built with Node.js and often needs far more memory. It also boots in under 2 milliseconds.</li>
<li><strong>Edge and IoT ready</strong>: It can run comfortably on a Raspberry Pi Zero or even a router.</li>
<li><strong>Zero dependencies</strong>: It is written in Zig and compiled to a static binary, so it does not require Python, Node.js, or another runtime on the host machine.</li>
</ul>
<h3 id="nullclaw-cons">Cons</h3>
<ul>
<li><strong>No GUI</strong>: Configuring NullClaw means editing JSON or YAML files and understanding system paths. It lacks the friendlier installer and dashboard experience that OpenClaw offers.</li>
<li><strong>Developer barrier</strong>: If you want to modify the core code, you need to know Zig, which is far less common than JavaScript or Python.</li>
</ul>
<p>If you’re looking for a lightweight, fast, and privacy-focused AI agent, NullClaw is a compelling option. It works well on a cheap VPS or embedded device where it can handle tasks quietly in the background.</p>
<h2 id="nanoclaw"><a rel="nofollow noopener" target="_blank" href="https://nanoclaw.dev/">NanoClaw</a></h2>
<p><strong>NanoClaw</strong> is a lightweight, open-source personal AI agent designed to run locally on your machine. While OpenClaw aims to be an all-in-one Swiss Army knife, NanoClaw takes a more focused, security-first approach.</p>
<p>Its minimal single-process architecture makes it easier to set up and run on low-powered devices like a <strong>Raspberry Pi</strong>. It also includes built-in support for WhatsApp, Telegram, Slack, and Gmail.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/nanoclaw.jpg" width="2000" height="1200" alt="NanoClaw dashboard view"><figcaption>NanoClaw leans into local isolation and privacy, which is a big part of its appeal.</figcaption></figure>
<h3 id="nanoclaw-pros">Pros</h3>
<ul>
<li><strong>OS-level isolation</strong>: NanoClaw runs each agent in its own Docker container or <a rel="nofollow noopener" target="_blank" href="https://github.com/apple/container?tab=readme-ov-file">Apple Container on macOS</a>. If an agent is compromised by a malicious site or prompt injection, it stays contained.</li>
<li><strong>Per-group privacy</strong>: Each WhatsApp or Telegram group gets its own isolated container, memory file, and filesystem. That reduces the risk of one chat leaking into another.</li>
<li><strong>Scheduled tasks</strong>: NanoClaw lets you schedule tasks at specific times or intervals, which is useful for reports, update checks, or cleanup jobs.</li>
</ul>
<h3 id="nanoclaw-cons">Cons</h3>
<ul>
<li><strong>Claude-locked</strong>: While OpenClaw supports many models, NanoClaw is closely tied to Anthropic. If you do not want to use Claude, this will be limiting.</li>
<li><strong>No GUI or dashboard</strong>: There is no point-and-click desktop app. Setup and management happen through the terminal and messaging apps.</li>
<li><strong>Setup complexity</strong>: It aims for low configuration, but you still need Docker or another container runtime installed and running.</li>
</ul>
<p><strong>NanoClaw</strong> makes sense if you want a secure and simple AI assistant that can handle private data safely inside isolated containers.</p>
<h2 id="picoclaw"><a rel="nofollow noopener" target="_blank" href="https://picoclaw.io/">PicoClaw</a></h2>
<p><strong>PicoClaw</strong> is an open-source AI assistant designed for very low-cost hardware. It uses minimal resources, starts quickly, and can still handle reminders, web searches, basic automation, and chat-based commands.</p>
<p>That makes it a good choice for personal projects, edge computing, or lightweight AI assistants that do not need powerful machines.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/picoclaw.jpg" width="2000" height="1200" alt="PicoClaw app screenshot"><figcaption>PicoClaw is aimed at low-cost hardware, where fast startup and low memory use matter more than polish.</figcaption></figure>
<h3 id="picoclaw-pros">Pros</h3>
<ul>
<li><strong>Asian platform support</strong>: Out of the box, it supports platforms like QQ, DingTalk, and LINE, which are often secondary in Western-focused frameworks.</li>
</ul>
<h3 id="picoclaw-cons">Cons</h3>
<ul>
<li><strong>No browser automation</strong>: OpenClaw’s biggest strength is its ability to use a computer directly. PicoClaw is a headless agent. It communicates through APIs and messages but cannot control a live browser window yet. A proposal <a rel="nofollow noopener" target="_blank" href="https://github.com/sipeed/picoclaw/issues/293">has been added</a>, but the feature is not available at the time of writing.</li>
<li><strong>No GUI</strong>: Everything is configured through a <code>config.json</code> file and used through Telegram or the terminal. If you want a graphical interface, this may be a deal breaker.</li>
</ul>
<p>If you need a lightweight AI assistant for low-cost hardware with strong Asian platform support, <strong>PicoClaw</strong> is worth a look.</p>
<h2 id="zeroclaw"><a rel="nofollow noopener" target="_blank" href="https://www.zeroclawlabs.ai">ZeroClaw</a></h2>
<p><strong>ZeroClaw</strong> is a lightweight, Rust-based AI agent runtime built for fast local execution. It needs under 5 MB of RAM and ships as a binary of roughly 3.4 MB.</p>
<p>It connects to providers like OpenAI, Anthropic, and local models. It also handles memory, tools, and workflows behind the scenes while storing data locally in SQLite, which helps keep everything private and under your control.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/zeroclaw.jpg" width="2000" height="1026" alt="ZeroClaw runtime screenshot"><figcaption>ZeroClaw focuses on local performance and broad provider support without asking for much hardware.</figcaption></figure>
<h3 id="zeroclaw-pros">Pros</h3>
<ul>
<li><strong>Lightweight</strong>: It can run on a single-core CPU while using very little RAM.</li>
<li><strong>High concurrency</strong>: ZeroClaw uses Rust’s async model to handle many simultaneous tasks with lower CPU overhead than heavier runtimes.</li>
<li><strong>Custom provider support</strong>: It supports custom providers, so you are not limited to only OpenAI or Anthropic.</li>
</ul>
<h3 id="zeroclaw-cons">Cons</h3>
<ul>
<li><strong>The Rust learning curve</strong>: Unless you already know Rust, customizing the core or writing advanced native plugins will be harder than doing the same in JavaScript or Python.</li>
<li><strong>Broad provider support can add complexity</strong>: It supports 20+ providers including <a rel="nofollow noopener" target="_blank" href="https://ollama.com">Ollama</a>, <a rel="nofollow noopener" target="_blank" href="https://www.deepseek.com">DeepSeek</a>, <a rel="nofollow noopener" target="_blank" href="https://www.moonshot.cn">Moonshot</a>, and others. That flexibility is useful, but it can also make configuration more involved.</li>
</ul>
<p>Like NullClaw and PicoClaw, ZeroClaw is a solid fit if you want an AI agent that is fast, reliable, and able to handle heavier workloads without using much memory.</p>
<h2 id="ironclaw"><a rel="nofollow noopener" target="_blank" href="https://github.com/nearai/ironclaw">IronClaw</a></h2>
<p><strong>IronClaw</strong> is a secure, open-source AI assistant you can run on your own machine. It focuses on privacy and safety by keeping workloads local and protecting sensitive information like credentials.</p>
<p>It isolates AI tasks in secure environments, monitors activity, and gives you tighter control over tool access. You can interact with it through APIs or messaging apps like Telegram and Slack, which makes it a flexible and security-focused alternative to OpenClaw.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/ironclaw.jpg" width="2000" height="1200" alt="IronClaw security interface"><figcaption>IronClaw is designed around safer execution, especially when sensitive tools or credentials are involved.</figcaption></figure>
<h3 id="ironclaw-pros">Pros</h3>
<ul>
<li><strong>WASM sandboxing</strong>: Unlike OpenClaw, which often runs tools on the host or inside general containers, IronClaw executes untrusted tools in isolated WebAssembly sandboxes.</li>
<li><strong>Credential protection</strong>: It uses a credential injection model where the AI model itself never sees your API keys. Secrets are injected only at execution time.</li>
<li><strong>Better performance</strong>: It offers a small memory footprint and quick startup times.</li>
</ul>
<h3 id="ironclaw-cons">Cons</h3>
<ul>
<li><strong>NEAR ecosystem dependency</strong>: To use many of its features, you currently need a NEAR AI account. That adds third-party authentication to the mix.</li>
<li><strong>WASM plugin limits</strong>: The sandboxed model improves safety, but it can also limit compatibility with tools that expect full native access.</li>
</ul>
<p>IronClaw is best suited to security-sensitive work, especially when the agent may handle credentials, financial data, or internal systems.</p>
<h2 id="oxibot"><a rel="nofollow noopener" target="_blank" href="https://github.com/DioCrafts/OxiBot">OxiBot</a></h2>
<p><strong>OxiBot</strong> is a lightweight AI assistant built in Rust that you can run on your own machine. It is designed to be fast, secure, and simple while using fewer system resources than larger AI tools.</p>
<p>It is effectively a port of <a rel="nofollow noopener" target="_blank" href="https://github.com/HKUDS/nanobot">Nanobot</a> with a similar configuration model and feature set, but built in Rust instead of Python. It ships as a single static binary that is easy to deploy and run.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/oxibot.jpg" width="2000" height="1200" alt="OxiBot terminal interface"><figcaption>OxiBot keeps the Nanobot-style approach, but swaps Python for a leaner Rust foundation.</figcaption></figure>
<h3 id="oxibot-pros">Pros</h3>
<ul>
<li><strong>Rust-based</strong>: Built with Rust, it offers better performance and memory efficiency than many Python-based alternatives.</li>
<li><strong>Nanobot clone</strong>: If you are already familiar with Nanobot, OxiBot will feel immediately familiar.</li>
<li><strong>Single static binary</strong>: It comes as one static binary that is easy to deploy and run.</li>
</ul>
<h3 id="oxibot-cons">Cons</h3>
<ul>
<li><strong>Development friction</strong>: Rust is powerful, but it is harder to modify and extend than the JavaScript or Python used by many other tools in this category.</li>
<li><strong>Headless by nature</strong>: It is not built for browser automation in the same way OpenClaw is. It focuses on text, tasks, and command execution rather than GUI control.</li>
</ul>
<p><strong>OxiBot</strong> is a practical choice if you want a fast local assistant with low overhead, especially if you plan to pair it with a local model like <a rel="nofollow noopener" target="_blank" href="https://ollama.com">Ollama</a>.</p>
<h2 id="maxclaw"><a rel="nofollow noopener" target="_blank" href="https://maxclaw.top">MaxClaw</a></h2>
<p><strong>MaxClaw</strong> is a local AI assistant built with Go and designed to run on your own machine with relatively low memory usage. It works as a single efficient program that handles tasks, tools, and workflows without needing cloud services.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/maxclaw.jpg" width="2000" height="1200" alt="MaxClaw desktop interface"><figcaption>MaxClaw is one of the few options here that still tries to keep a straightforward GUI in the mix.</figcaption></figure>
<h3 id="maxclaw-pros">Pros</h3>
<ul>
<li><strong>Go-based</strong>: Built with Go, it offers strong performance and memory efficiency compared with many JavaScript- or Python-based alternatives.</li>
<li><strong>GUI</strong>: It includes a simple interface for chatting, managing files, and running commands in one place.</li>
</ul>
<h3 id="maxclaw-cons">Cons</h3>
<ul>
<li><strong>Limited providers</strong>: It seems to support only OpenAI and Anthropic. That may be restrictive if you prefer providers like Ollama or DeepSeek.</li>
<li><strong>No Windows support</strong>: Both the GUI and CLI are available only on Linux and macOS.</li>
</ul>
<p><strong>MaxClaw</strong> is a good fit if you want a fast, lightweight local AI assistant with a GUI, though it is a weaker choice for Windows users.</p>
<h2 id="copaw"><a rel="nofollow noopener" target="_blank" href="https://copaw.agentscope.io">CoPaw</a></h2>
<p><strong>CoPaw</strong> is an open-source AI agent platform that lets you run and manage agents locally or in the cloud. It is easy to set up and includes a simple web interface for chatting, configuring agents, and tracking usage.</p>
<p>It supports integrations with apps like Discord and iMessage, works with local AI models for better privacy, and includes support for skills that extend its capabilities. You can get started with a one-line install script or Docker.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/copaw.jpg" width="2000" height="1200" alt="CoPaw web interface"><figcaption>CoPaw stands out for its ready-made interface and multi-platform messaging support.</figcaption></figure>
<h3 id="copaw-pros">Pros</h3>
<ul>
<li><strong>Multi-channel support</strong>: CoPaw has built-in support for iMessage, Discord, Telegram, DingTalk, Feishu, and QQ.</li>
<li><strong>Built-in MCP and skills</strong>: It includes MCP and skills out of the box, making it easier to add tools and capabilities.</li>
<li><strong>Web interface</strong>: It provides a simple interface for chatting with agents, configuring them, installing MCP, and adding providers.</li>
</ul>
<h3 id="copaw-cons">Cons</h3>
<ul>
<li><strong>Resource overhead</strong>: It needs more system resources than lightweight alternatives like PicoClaw or MaxClaw.</li>
<li><strong>Python dependencies</strong>: It relies on Python, which can offer flexibility but also adds environment and compatibility overhead.</li>
</ul>
<p><strong>CoPaw</strong> is a good option if you want a local AI assistant that spans multiple platforms and comes with a ready-made web interface.</p>
<h2 id="moltis"><a rel="nofollow noopener" target="_blank" href="https://moltis.org/">Moltis</a></h2>
<p><strong>Moltis</strong> is a self-hosted AI assistant built in Rust that runs as a single binary on your own machine. It works as a local hub that can connect to different AI providers while keeping everything private.</p>
<p>It includes long-term memory, voice support, and secure tool execution, and can be accessed through a web interface or apps like Telegram and Discord.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-alternatives/moltis.jpg" width="2000" height="1200" alt="Moltis desktop workspace"><figcaption>Moltis feels closer to a desktop workspace than a barebones agent runtime.</figcaption></figure>
<h3 id="moltis-pros">Pros</h3>
<ul>
<li><strong>Native desktop app</strong>: Moltis provides a dedicated desktop experience that feels more like a workstation than a simple chatbot.</li>
<li><strong>Voice support</strong>: It includes built-in voice support for hands-free interaction.</li>
<li><strong>GraphQL API</strong>: It offers a GraphQL API for developers who want programmatic access.</li>
</ul>
<h3 id="moltis-cons">Cons</h3>
<ul>
<li><strong>GUI reliance</strong>: Moltis is heavily GUI-driven. If you want a headless agent that runs quietly on a server, OpenClaw or Nanobot is a better fit.</li>
</ul>
<p><strong>Moltis</strong> is best suited to people who prefer a polished desktop interface and workspace-style setup while still running everything locally.</p>
<h2 id="wrapping-up">Wrapping Up</h2>
<p>Since we last looked at OpenClaw, an entire ecosystem of alternatives has emerged. Many aim to cut resource requirements from something that feels heavy down to something that can run on a much smaller machine.</p>
<p>Here’s a quick breakdown of the key players we’ve covered to make the choices easier to compare:</p>
<table>
<thead>
<tr>
<th>Agent</th>
<th>Runtime</th>
<th>Size (Binary/Core)</th>
<th>RAM Usage</th>
<th>GUI Included</th>
</tr>
</thead>
<tbody>
<tr>
<td>OpenClaw</td>
<td>Node.js (TS)</td>
<td>~200 MB (dist/deps)</td>
<td>~1.5 GB+</td>
<td>Yes (Web)</td>
</tr>
<tr>
<td>Nanobot</td>
<td>Python</td>
<td>~3,500 LOC</td>
<td>~50-100 MB</td>
<td>No</td>
</tr>
<tr>
<td>NanoClaw</td>
<td>Node.js</td>
<td>&lt; 1 MB (Code)</td>
<td>~100 MB + Container</td>
<td>No</td>
</tr>
<tr>
<td>ZeroClaw</td>
<td>Rust (Native)</td>
<td>~3.4 MB</td>
<td>&lt; 5 MB</td>
<td>No</td>
</tr>
<tr>
<td>PicoClaw</td>
<td>Go (Native)</td>
<td>~8 MB</td>
<td>~10-45 MB</td>
<td>Yes (Minimal Web)</td>
</tr>
<tr>
<td>Moltis</td>
<td>Rust (Native)</td>
<td>~42 MB</td>
<td>~60-100 MB</td>
<td>Yes (Native Desktop)</td>
</tr>
<tr>
<td>IronClaw</td>
<td>Rust (Native)</td>
<td>~5 MB</td>
<td>N/A</td>
<td>Yes (Web/TUI)</td>
</tr>
<tr>
<td>OxiBot</td>
<td>Rust (Native)</td>
<td>~18 MB</td>
<td>&lt; 8 MB</td>
<td>No</td>
</tr>
<tr>
<td>NullClaw</td>
<td>Zig (Native)</td>
<td>678 KB</td>
<td>~1 MB</td>
<td>No</td>
</tr>
<tr>
<td>CoPaw</td>
<td>Python</td>
<td>~15 MB (Core)</td>
<td>~150-300 MB</td>
<td>Yes (Console/App)</td>
</tr>
<tr>
<td>MaxClaw</td>
<td>Go (Native)</td>
<td>N/A</td>
<td>N/A</td>
<td>Yes (Desktop)</td>
</tr>
</tbody>
</table><p>The post <a href="https://www.hongkiat.com/blog/openclaw-alternatives/">10 OpenClaw Alternatives Worth Trying</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74251</post-id>	</item>
		<item>
		<title>OpenClaw's Dreaming Feature Helps Your AI Remember Better</title>
		<link>https://www.hongkiat.com/blog/openclaw-dreaming-feature/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 11:34:00 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74285</guid>

					<description><![CDATA[<p>OpenClaw's Dreaming feature helps your AI keep useful context over time instead of forgetting everything between chats.</p>
<p>The post <a href="https://www.hongkiat.com/blog/openclaw-dreaming-feature/">OpenClaw&#x27;s Dreaming Feature Helps Your AI Remember Better</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>One of the easiest ways to notice an AI assistant's limits is this: it feels sharp today, then oddly forgetful tomorrow.</p>
<p>You explain a workflow, mention a preference, or spend half an hour giving useful context. The chat goes well. Then the next session starts and you are back to repeating yourself.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-dreaming-feature/openclaw-dreaming-mode.jpg" alt="OpenClaw Dreaming mode" width="1280" height="900"></figure>
<p>That is the problem OpenClaw's <a rel="nofollow noopener" target="_blank" href="https://docs.openclaw.ai/concepts/dreaming">Dreaming feature</a> is trying to solve. It is still marked as experimental at the time of writing, but the idea is already useful.</p>
<p>Instead of treating memory like a junk drawer, Dreaming periodically reviews what happened, keeps what matters, and leaves the rest behind. The goal is simple: help your AI remember the right things.</p>
<h2 id="why-dreaming">Why "Dreaming"?</h2>
<p>Because sleep is when the brain sorts things out.</p>
<p>We do not remember every tiny detail from the day. We keep the useful parts, drop the noise, and sometimes connect ideas in a way that only makes sense later.</p>
<p>OpenClaw is borrowing that idea.</p>
<p>When Dreaming runs, it reviews recent conversations, looks for what keeps coming up, and promotes only the stronger signals into longer-term memory. It also writes a plain-English summary of what it noticed.</p>
<p>That part matters.</p>
<p>A lot of AI memory features feel like black boxes. Dreaming is easier to trust because you can actually inspect what it decided was worth keeping.</p>
<h2 id="how-dreaming-works">How Dreaming Works</h2>
<p>At a high level, Dreaming runs on a schedule. The default is nightly at <strong>3 AM</strong>, though that can be customized.</p>
<p>The process is split into three phases.</p>
<h3 id="1-light-phase">1. Light Phase</h3>
<p>This is the cleanup pass.</p>
<p>Dreaming goes through recent conversations, removes duplicates, and organizes the noise before anything gets promoted.</p>
<h3 id="2-rem-phase">2. REM Phase</h3>
<p>This is the reflection pass.</p>
<p>The system looks for patterns, recurring topics, and things that keep coming up. If you keep talking about the same project or problem, that starts to matter.</p>
<h3 id="3-deep-phase">3. Deep Phase</h3>
<p>This is where promotion decisions happen.</p>
<p>The system scores what it found and only promotes the strongest memories into long-term storage, such as <code>MEMORY.md</code>.</p>
<p>That part matters because Dreaming is <strong>disposable by default</strong>. It is not trying to save everything forever. It is trying to keep what will still be useful later.</p>
<h2 id="the-dream-diary-is-the-best-part">The Dream Diary Is the Best Part</h2>
<p>After each cycle, OpenClaw writes a summary to <code>DREAMS.md</code>.</p>
<p>It is basically a short diary entry in plain English showing what the system noticed. Something like:</p>
<pre>"You've been talking a lot about network routing configurations lately. This looks like an ongoing project."</pre>
<p>That transparency makes the feature easier to trust.</p>
<p>Instead of asking you to believe a black box, OpenClaw shows its work. You can read the diary, inspect what it thinks is important, and decide whether the memory behavior feels sensible.</p>
<p>There is also a Dreams UI you can open with <strong>Ctrl + I</strong>, which gives you a nicer way to browse that information.</p>
<h2 id="how-to-turn-dreaming-on">How to Turn Dreaming On</h2>
<p>Dreaming is <strong>disabled by default</strong>, so you need to enable it explicitly.</p>
<p>You can do that in your OpenClaw config like this:</p>
<pre>{
  "plugins": {
    "entries": {
      "memory-core": {
        "config": {
          "dreaming": {
            "enabled": true
          }
        }
      }
    }
  }
}</pre>
<p>You can also customize the schedule if you do not want it running only once a night. For example, you might prefer a shorter interval like every six hours, depending on how heavily you use your agent.</p>
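<p>For illustration only, a tighter cycle might be configured along these lines. The <code>schedule</code> key here is a hypothetical placeholder name, not a confirmed option, so check the Dreaming docs for the actual key before using it:</p>
<pre>{
  "plugins": {
    "entries": {
      "memory-core": {
        "config": {
          "dreaming": {
            "enabled": true,
            "schedule": "every 6 hours"
          }
        }
      }
    }
  }
}</pre>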
<p>If you would rather manage it from chat, OpenClaw also supports slash commands:</p>
<ul>
<li><code>/dreaming on</code></li>
<li><code>/dreaming off</code></li>
<li><code>/dreaming status</code></li>
<li><code>/dreaming help</code></li>
</ul>
<p>And if you want more control, there are CLI tools for previewing what would be promoted and understanding why something made the cut.</p>
<p>That is useful if you are the kind of person who likes seeing the scoring logic before trusting automation.</p>
<h2 id="why-this-matters">Why This Matters</h2>
<p>Without some kind of memory consolidation, AI assistants tend to become either forgetful or cluttered.</p>
<p>Forgetful is obvious: the assistant loses important context and becomes less useful over time.</p>
<p>Cluttered is the quieter problem. If a system saves too much, long-term memory fills up with noise. Old one-off details start sitting beside real preferences and active projects. The result is not intelligence. It is hoarding.</p>
<p>Dreaming aims for the middle ground: keep less, but keep the right things. If it works well, your AI becomes more useful over time because it retains context more cleanly.</p>
<h2 id="why-this-lands-at-the-right-time">Why This Lands at the Right Time</h2>
<p>This feature also lands at a good time.</p>
<p>A lot of developers have been complaining lately that <a href="https://www.hongkiat.com/blog/essential-cursor-editor-tips/">AI coding tools</a> feel less dependable than they used to. When that happens, memory matters more. If an assistant already makes mistakes, it feels even worse when it also forgets your setup, preferences, and ongoing work – especially in more hands-on setups like <a href="https://www.hongkiat.com/blog/lm-studio-ai-assistance-vs-code-setup/">local AI assistance in VS Code</a>.</p>
<p>Dreaming does not fix weak reasoning. But it can reduce one layer of friction by helping the agent stay grounded in the context that actually matters.</p><p>The post <a href="https://www.hongkiat.com/blog/openclaw-dreaming-feature/">OpenClaw&#x27;s Dreaming Feature Helps Your AI Remember Better</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74285</post-id>	</item>
		<item>
		<title>OpenClaw Telegram Bot Slow on Telegram? What’s Normal and What to Fix</title>
		<link>https://www.hongkiat.com/blog/openclaw-telegram-bot-slow/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Sun, 05 Apr 2026 13:00:10 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74249</guid>

					<description><![CDATA[<p>If your OpenClaw Telegram bot feels slow, some delay is normal. But if replies drag too long, a few simple fixes usually make a real difference.</p>
<p>The post <a href="https://www.hongkiat.com/blog/openclaw-telegram-bot-slow/">OpenClaw Telegram Bot Slow on Telegram? What&#8217;s Normal and What to Fix</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>If you’ve just set up an <a href="https://www.hongkiat.com/blog/configure-deepseek-openclaw/">OpenClaw bot on Telegram</a>, this is probably one of the first things you’ll notice.</p>
<p>It works. But it doesn’t always feel fast.</p>
<p>You send a message, wait a couple of seconds, then start wondering if something is broken. Sometimes the reply comes back quickly. Sometimes it takes 10 to 15 seconds. And sometimes it feels like the bot wandered off to make coffee before answering.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/openclaw-telegram-bot-slow/telegram-delayed-replies.jpg" alt="Telegram chat showing delayed bot replies" width="1200" height="675"><figcaption>A Telegram chat with visible bot reply delays and typing indicators</figcaption></figure>
<p>The short version: some delay is normal.</p>
<p>A Telegram bot backed by a full AI agent has more work to do than a typical chatbot. It has to receive the message, pass it through the gateway, load conversation history, decide whether tools are needed, generate the response, and send everything back through Telegram.</p>
<p>That stack takes time.</p>
<p>Still, not every delay is normal. When simple replies keep dragging, or the latency feels random, there’s usually something in the setup worth fixing.</p>
<h2 id="telegram-latency-normal">Is OpenClaw Telegram Latency Normal?</h2>
<p>Yes, to a point.</p>
<p>For most setups, the rough baseline looks like this:</p>
<ul>
<li><strong>Simple messages:</strong> around 2 to 5 seconds</li>
<li><strong>More complex requests:</strong> around 8 to 15 seconds</li>
<li><strong>Tool-heavy or long-context tasks:</strong> sometimes longer</li>
</ul>
<p>That’s typical for LLM-powered bots.</p>
<p>If you’re using a <a href="https://www.hongkiat.com/blog/local-llm-setup-optimization-lm-studio/">larger model</a>, sending reasoning-heavy prompts, or carrying a long chat history, the extra delay makes sense. These systems aren’t just matching rules and returning canned replies. They’re processing context, generating tokens, and sometimes calling tools along the way.</p>
<p>Where it stops feeling normal is when:</p>
<ul>
<li>basic replies regularly take more than 15 seconds</li>
<li>the bot stays on “typing…” for too long</li>
<li>responses sometimes jump to 30 to 60 seconds without a clear reason</li>
<li>the bot feels like it needs a manual wake-up</li>
</ul>
<p>That’s usually a sign that the slowdown is fixable.</p>
<h2 id="why-replies-are-slow">Why Are OpenClaw Telegram Replies Slow?</h2>
<p>The total response time usually comes from a few smaller delays stacked together.</p>
<h3 id="telegram-polling-drag">Telegram Polling Adds a Bit of Drag</h3>
<p>If you’re using long polling, OpenClaw has to keep checking Telegram for new messages.</p>
<p>That works, but it adds a bit of lag. Polling connections can also stall, which makes the bot feel sleepy until something nudges it again.</p>
<p>Webhooks are usually faster because Telegram pushes updates to your bot directly instead of waiting to be polled.</p>
<p>This usually isn’t the main problem.</p>
<p>But it does add drag.</p>
<h3 id="model-processing-cost">Model Processing Is Often the Biggest Cost</h3>
<p>This is usually where the real time goes.</p>
<p>The model needs to:</p>
<ul>
<li>read the conversation history</li>
<li>understand the prompt</li>
<li>decide what to do</li>
<li>generate the response token by token</li>
</ul>
<p>A short question with almost no history is cheap.</p>
<p>A long conversation, a heavyweight model, and a prompt that triggers tool use? Much less cheap.</p>
<p>That’s why the same bot can feel fast one moment and noticeably slower the next.</p>
<h3 id="partial-streaming-slower">Partial Streaming Can Make the Bot Feel Slower</h3>
<p>This one catches a lot of people.</p>
<p>If partial streaming is enabled, the bot may send tiny chunks as they arrive. That sounds faster on paper. In practice, it can feel slower because Telegram keeps showing typing indicators while the answer trickles in.</p>
<p>You’re technically seeing output earlier, but the overall experience feels more drawn out.</p>
<p>Not ideal.</p>
<h3 id="ipv6-causes-delays">IPv6 Issues Can Cause Huge Delays</h3>
<p>This is one of the most common causes of random slowdowns.</p>
<p>Node.js tends to prefer IPv6 first. If your VPS or hosting provider has flaky IPv6 routing to Telegram or your model provider, requests can hang before they fall back to IPv4.</p>
<p>The result is nasty.</p>
<p>Everything looks fine, but every request quietly pays a timeout penalty.</p>
<p>If your bot sometimes replies in a few seconds and other times takes nearly a minute, this is one of the first things I’d check.</p>
<h3 id="server-hosting-matter">Server and Hosting Still Matter</h3>
<p>Sometimes the answer is boring.</p>
<p>Low CPU, low RAM, poor routing, cold starts, or a server that’s simply far from your region can all add latency.</p>
<p>Not glamorous. Still real.</p>
<h3 id="long-chat-history">Long Chat History Slows Things Down</h3>
<p>Long conversations are expensive.</p>
<p>Persistent context is useful right up until it starts getting in the way. If every new message forces the model to chew through a giant backlog, even small replies start getting slower.</p>
<p>Convenient? Yes.</p>
<p>Free? Not even close.</p>
<h2 id="what-to-fix-first">What to Fix First</h2>
<p>If your OpenClaw Telegram bot feels slower than it should, try these fixes in order.</p>
<h3 id="check-streaming-mode">1. Check Your Streaming Mode</h3>
<p>If your Telegram config is using partial streaming, try disabling it or switching to full-response mode.</p>
<p>This is often the quickest win: replies feel snappier end to end.</p>
<p>In OpenClaw, draft streaming is controlled by <code>channels.telegram.streamMode</code> (<code>off</code>, <code>partial</code>, or <code>block</code>). Default is <code>partial</code>. Set it to <code>off</code> so Telegram gets one complete reply instead of draft-bubble updates (full-response style). Optional: <code>block</code> still uses drafts but refreshes in larger chunks than <code>partial</code>.</p>
<p>Edit your gateway config (for example <code>~/.openclaw/openclaw.json</code>), then restart the gateway:</p>
<pre>{
  "channels": {
    "telegram": {
      "streamMode": "off"
    }
  }
}</pre>
<p>If you later want partial draft streaming again, set <code>"streamMode": "partial"</code> (or remove the key so the default applies).</p>
<h3 id="force-ipv4-first">2. Force IPv4 First</h3>
<p>If your setup has flaky IPv6 routing, this can make a dramatic difference.</p>
<p>For systemd-based setups, the common fix looks like this:</p>
<pre>Environment="NODE_OPTIONS=--dns-result-order=ipv4first"</pre>
<p>Then reload and restart the gateway.</p>
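<p>In practice, that line lives in the unit file or a drop-in override. As a sketch, assuming a systemd unit for the gateway (the unit name and file path below are placeholders to adjust to your setup):</p>
<pre># /etc/systemd/system/openclaw.service.d/ipv4first.conf (placeholder names)
[Service]
Environment="NODE_OPTIONS=--dns-result-order=ipv4first"</pre>
<p>systemd only picks up drop-ins after a <code>daemon-reload</code>, which is why the reload step matters.</p>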
<p>If your Telegram channel config supports <code>dnsResultOrder</code>, you can set the equivalent there too.</p>
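<p>Assuming that option mirrors the Node flag, the channel-level version would look roughly like this (the key placement is a guess, so verify it against your config schema):</p>
<pre>{
  "channels": {
    "telegram": {
      "dnsResultOrder": "ipv4first"
    }
  }
}</pre>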
<p>This isn’t one of those tiny tweaks that may or may not matter.</p>
<p>When IPv6 is the problem, this fix tends to hit immediately.</p>
<h3 id="reduce-context-bloat">3. Reduce Context Bloat</h3>
<p>If the bot gets slower over time, clean up the session.</p>
<p>A few simple habits help:</p>
<ul>
<li>use <code>/new</code> to start fresh</li>
<li>use <code>/compact</code> to shorten long histories</li>
<li>lower context limits in config if needed</li>
</ul>
<p>Not every chat needs to carry its full life story.</p>
<h3 id="try-faster-model">4. Try a Faster Model</h3>
<p>For everyday Telegram use, smaller and faster models often feel better.</p>
<p>If you’re using a large model for casual back-and-forth, you’re probably trading responsiveness for depth you don’t need on every message.</p>
<p>Use the bigger models when the task deserves it. Use the faster ones when you just want the bot to respond like a normal creature.</p>
<p>Good options to try: GPT-4o mini, Claude Haiku, or Gemini Flash on the cloud side; a compact Llama or Qwen locally. They’re built for speed, not benchmarks, which is the right trade-off for Telegram.</p>
<h3 id="use-webhooks">5. Use Webhooks If Your Setup Supports Them</h3>
<p>Polling is simpler.</p>
<p>Webhooks are usually faster.</p>
<p>If low latency matters to you, it’s worth testing a <a rel="nofollow noopener" target="_blank" href="https://docs.openclaw.ai/channels/telegram">webhook-based setup</a> to see whether it feels more responsive in real usage.</p>
<h3 id="check-boring-stuff">6. Check the Boring Stuff Too</h3>
<p>This part isn’t exciting, but it matters:</p>
<ul>
<li>make sure your server has enough CPU and RAM</li>
<li>host closer to your region if possible</li>
<li>keep OpenClaw updated</li>
<li>check logs for timeouts, fallback behavior, or repeated failures</li>
</ul>
<p>When a system feels slow, the logs are usually less confused than the human reading them.</p>
<h2 id="rule-of-thumb">A Simple Rule of Thumb</h2>
<p>If basic replies take a few seconds, that’s fine.</p>
<p>If heavier prompts take longer, also fine.</p>
<p>If trivial messages are consistently slow, or the latency feels random and exaggerated, start with these two checks first:</p>
<ol>
<li>streaming mode</li>
<li>IPv6 vs IPv4 behavior</li>
</ol>
<p>Those two cause a surprising amount of pain.</p>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>Some latency is part of the deal when you run a capable AI agent through Telegram.</p>
<p>That’s normal.</p>
<p>But long, frustrating delays usually aren’t something you have to accept. Most of the time, the cause is less dramatic than people think. It’s usually something ordinary and fixable:</p>
<ul>
<li>partial streaming that feels worse than it helps</li>
<li>bloated context</li>
<li>flaky IPv6 routing</li>
<li>a heavyweight model handling lightweight tasks</li>
</ul>
<p>Which is good news.</p>
<p>Because boring problems are usually easier to fix than mysterious ones.</p>
<p>And once you fix them, the bot starts feeling a lot less like a side project and a lot more like something you’d actually want to use every day.</p><p>The post <a href="https://www.hongkiat.com/blog/openclaw-telegram-bot-slow/">OpenClaw Telegram Bot Slow on Telegram? What&#8217;s Normal and What to Fix</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74249</post-id>	</item>
		<item>
		<title>Use Siri to Message Your Telegram Bot (Yes, Even on CarPlay)</title>
		<link>https://www.hongkiat.com/blog/use-siri-message-telegram-bot/</link>
		
		<dc:creator><![CDATA[Hongkiat Lim]]></dc:creator>
		<pubDate>Sun, 05 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Mobile]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74271</guid>

					<description><![CDATA[<p>A neat Telegram iOS trick lets Siri and CarPlay open a bot chat by voice, with no shortcuts or extra apps involved.</p>
<p>The post <a href="https://www.hongkiat.com/blog/use-siri-message-telegram-bot/">Use Siri to Message Your Telegram Bot (Yes, Even on CarPlay)</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Telegram shipped a small but genuinely useful update to their iOS app not too long ago. I first came across it while browsing the official <a rel="nofollow noopener" target="_blank" href="https://github.com/TelegramMessenger/Telegram-iOS">Telegram iOS GitHub repo</a>, and it turns out you can now create a contact in iOS that lets Siri (and CarPlay) send messages to your Telegram bot by voice. No third-party apps, no shortcuts, no workarounds.</p>
<p>I use this to ping my OpenClaw bot hands-free while driving. It just works.</p>
<h2 id="bot-peer-id">Bot Peer ID First</h2>
<p>Before setting things up, you need your bot’s Peer ID. It’s buried in your bot’s API token.</p>
<p>Open the chat with your bot in Telegram. If you created the bot via @BotFather, the confirmation message includes the API token, something like <code>851234567:AA...</code>. The Peer ID is the number before the colon. In that example, <code>851234567</code>.</p>
<p>If you can no longer find that message, open <strong>@BotFather</strong>, select your bot, tap <strong>API Token</strong>, and Telegram will show the token again.</p>
<p>Write it down. You’ll need it in a second.</p>
<figure>
  <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/use-siri-message-telegram-bot/telegram-bot-peer-id.jpg" alt="Telegram bot peer ID" width="972" height="1760"><figcaption>The bot token shown in Telegram, with the Peer ID being the number before the colon.</figcaption></figure>
<h2 id="create-contact">Create the Contact</h2>
<p>Open the Contacts app on your iPhone. Tap the <strong>+</strong> button to add a new contact.</p>
<p>Pick a short name, something Siri can recognize easily and unambiguously. I went with “Claw”, but “OpenClaw” or “MyBot” works just as well. Avoid names that sound like other contacts.</p>
<p>Scroll down, tap <strong>add url</strong>, change the label to <strong>Telegram</strong> (this part matters, so don’t skip it), then paste:</p>
<pre>https://t.me/@oid&lt;PEER_ID&gt;</pre>
<p>Replace <code>&lt;PEER_ID&gt;</code> with your actual number. So if your Peer ID is <code>851234567</code>, the full URL becomes:</p>
<pre>https://t.me/@oid851234567</pre>
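<p>If you would rather not eyeball the token, the rule is mechanical: everything before the colon is the Peer ID. A quick sketch using the example token from this article:</p>

```python
# The Peer ID is simply the part of the BotFather token before the colon.
token = "851234567:AA..."  # example token from this article, not a real one
peer_id = token.split(":", 1)[0]  # keep only the part before the colon
url = f"https://t.me/@oid{peer_id}"
print(url)  # https://t.me/@oid851234567
```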
<p>Save the contact.</p>
<figure>
  <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/use-siri-message-telegram-bot/create-contact.jpg" alt="Create Telegram contact" width="972" height="1760"><figcaption>Add a contact, switch the URL label to Telegram, and paste in the bot link with your Peer ID.</figcaption></figure>
<h2 id="one-prerequisite">One Prerequisite</h2>
<p>Make sure you’re using the official Telegram app on iPhone and that it’s reasonably up to date. This trick relies on Telegram’s iOS contact integration, so if the Telegram URL label does not show up in Contacts, update the app first.</p>
<p>The bot won’t respond unless you’ve already sent it at least one message from your account. If you’ve never chatted with it before, open the bot in Telegram, hit Start or send any message, then come back.</p>
<h2 id="using-it">Using It</h2>
<ul>
<li>Say <strong>“Hey Siri, message Claw on Telegram”</strong>, using whatever name you gave the contact.</li>
<li>Siri opens the Telegram chat with your bot and is ready for your next voice command.</li>
<li>On CarPlay, the same contact appears in your app list and responds to your car’s voice assistant.</li>
</ul>
<p>It’s the same mechanism, just carried over to a driving context.</p>
<h2 id="what-works">What Actually Works</h2>
<ul>
<li>Any Telegram bot you have an existing chat with, not just OpenClaw bots.</li>
<li>The contact name needs to be short and distinct. Naming a contact “Test” or “Bot” will confuse Siri.</li>
<li>If you rename the contact later, update the Siri phrase accordingly.</li>
</ul><p>The post <a href="https://www.hongkiat.com/blog/use-siri-message-telegram-bot/">Use Siri to Message Your Telegram Bot (Yes, Even on CarPlay)</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74271</post-id>	</item>
		<item>
		<title>Gemma 4 Just Dropped. Can Your Computer Handle It?</title>
		<link>https://www.hongkiat.com/blog/run-gemma-4-locally/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Sat, 04 Apr 2026 17:47:06 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74275</guid>

					<description><![CDATA[<p>Gemma 4 is here, and the real question is not hype. It is whether your laptop or desktop can run it locally without pain.</p>
<p>The post <a href="https://www.hongkiat.com/blog/run-gemma-4-locally/">Gemma 4 Just Dropped. Can Your Computer Handle It?</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Google DeepMind released <a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/" rel="nofollow noopener" target="_blank">Gemma 4</a> on April 2, 2026, and it looks like their most ambitious open model family so far.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/run-gemma-4-locally/gemma-4-model-family.jpg" alt="Gemma 4 models" width="1830" height="858"></figure>
<p>On paper, it checks a lot of boxes: long context windows, multimodal input, strong reasoning, broad language support, and an Apache 2.0 license that makes it easier to use in real projects without weird restrictions hanging over your head.</p>
<p>But most people asking about Gemma 4 are not starting with the license.</p>
<p>They are asking a simpler question:</p>
<p><strong>Can I actually run this on my own computer?</strong></p>
<p>The short answer is yes.</p>
<p>And more interestingly, you probably can without needing some absurd server rack in the corner of your room.</p>
<p>Gemma 4 comes in a few sizes, from smaller models that should be comfortable on laptops and edge devices, all the way up to much larger variants that make more sense on high-end GPUs or machines with plenty of unified memory. So whether you are just curious, privacy-minded, or trying to run models locally for coding, testing, or agent workflows, there is likely a version that fits.</p>
<p>If you are new to this whole setup, <a href="https://www.hongkiat.com/blog/top-ai-apps-for-local-use/">these apps for running AI locally</a> are a useful starting point before you go deeper into model sizes and hardware tradeoffs.</p>
<p>In this post, I will walk through what Gemma 4 is, which model sizes are available, what kind of hardware you will need, and the easiest ways to run it locally.</p>
<h2 id="what-is-gemma-4">What Is Gemma 4?</h2>
<p>Gemma is Google DeepMind’s family of open-weight models built from the same research direction behind Gemini. Earlier releases already had a decent reputation among people who like running models locally, mainly because they delivered more than you would expect for their size.</p>
<p>Gemma 4 pushes that further.</p>
<p>At launch, the lineup includes four variants:</p>
<ul>
<li><strong>Gemma 4 E2B</strong>: a small model aimed at lightweight devices</li>
<li><strong>Gemma 4 E4B</strong>: a more capable small model that should be the sweet spot for many people</li>
<li><strong>Gemma 4 26B A4B</strong>: a Mixture-of-Experts model with only a small fraction of its parameters active per token</li>
<li><strong>Gemma 4 31B</strong>: the largest dense model in the family</li>
</ul>
<p>Google positions the family as multimodal, with native vision and audio support, along with long context windows that scale up to 256K on the larger models. It also supports over 140 languages, which makes it more interesting for global use than models that mainly feel tuned for English-first workflows.</p>
<p>The practical takeaway is this: Gemma 4 is not just another open model release for benchmark watchers. It is meant to be usable.</p>
<p>That matters.</p>
<p>Because the moment a model becomes easy to run locally, it stops being just a research headline and starts becoming part of real workflows.</p>
<h2 id="run-locally">Can You Run Gemma 4 Locally?</h2>
<p>Yes. That is one of the most appealing things about this release.</p>
<p>The smaller Gemma 4 variants are meant for local and edge use, so you do not need elite hardware just to try them. If you have run other local models through <a href="https://www.hongkiat.com/blog/ollama-ai-setup-guide/">Ollama</a>, LM Studio, <code>llama.cpp</code>, or Transformers, the setup here will feel familiar.</p>
<ul>
<li><strong>Ollama</strong> if you want the fastest way from zero to running model</li>
<li><strong>LM Studio</strong> if you prefer clicking over terminals</li>
<li><strong>Hugging Face + Transformers</strong>, <strong>llama.cpp</strong>, or <strong>vLLM</strong> if you want more control</li>
<li><strong>Kaggle</strong> if you want access through Google’s own ecosystem</li>
</ul>
<p>Once the model is downloaded, the obvious local-use benefits kick in: better privacy, offline access, and less dependency on API pricing or rate limits.</p>
<p>That alone will be enough to pull in a lot of developers.</p>
<h2 id="hardware-needs">Can Your Computer Handle It?</h2>
<p>This is where things get real.</p>
<p>A model may be open, but that does not automatically mean it will run well on your computer. The main limiting factor is memory, especially if you want decent speed and longer context windows.</p>
<p>Here are the approximate base memory requirements for Gemma 4 weights:</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>BF16 / FP16</th>
<th>8-bit</th>
<th>4-bit</th>
</tr>
</thead>
<tbody>
<tr>
<td>Gemma 4 E2B</td>
<td>9.6 GB</td>
<td>4.6 GB</td>
<td>3.2 GB</td>
</tr>
<tr>
<td>Gemma 4 E4B</td>
<td>15 GB</td>
<td>7.5 GB</td>
<td>5 GB</td>
</tr>
<tr>
<td>Gemma 4 26B A4B</td>
<td>48 GB</td>
<td>25 GB</td>
<td>15.6 GB</td>
</tr>
<tr>
<td>Gemma 4 31B</td>
<td>58.3 GB</td>
<td>30.4 GB</td>
<td>17.4 GB</td>
</tr>
</tbody>
</table>
<p>That is just the model weights. Real usage needs extra headroom for context, KV cache, and runtime overhead, so it is smarter to treat those numbers as the floor, not the target.</p>
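<p>If you want a quick sanity check before downloading, the floor is easy to estimate: weights take roughly the parameter count times the bytes per parameter. This is a rule of thumb for illustration, not an official formula:</p>

```python
# Rough rule of thumb (illustrative, not an official formula):
# weight memory ≈ parameters × bytes per parameter.
# Quantized files also carry per-block metadata, and the KV cache
# grows with context, so real usage runs higher than this floor.
def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate weight size in GB for a dense model."""
    return params_billion * bits / 8  # 1B params at 8-bit ≈ 1 GB

print(round(weight_gb(31, 4), 1))  # 15.5, close to the table's 17.4 GB figure
```

<p>The gap between that estimate and the published numbers is quantization format overhead; context and runtime memory come on top of either figure.</p>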
<p>Here is the practical version.</p>
<h3 id="gemma-4-e2b">Gemma 4 E2B</h3>
<p>This is the lightweight option. In 4-bit form, it should be workable on modest hardware and even CPU-heavy setups. If you just want to test prompts, tinker offline, or run something locally without stressing your computer, this is the easiest entry point.</p>
<h3 id="gemma-4-e4b">Gemma 4 E4B</h3>
<p>This will probably be the sweet spot for most people. It is small enough to be practical, but large enough to feel more useful for everyday local work. If you are on an M-series Mac or a midrange NVIDIA GPU, this is likely the version to try first.</p>
<h3 id="gemma-4-26b-a4b">Gemma 4 26B A4B</h3>
<p>This is where things start getting more serious. Because it is a Mixture-of-Experts model, it may be more efficient than the raw parameter count suggests, but it still wants real hardware. A high-end GPU or a well-specced Mac Studio makes much more sense here.</p>
<h3 id="gemma-4-31b">Gemma 4 31B</h3>
<p>This is the big one. If you want the best quality in the family, this is probably where you look. But if you are hoping to run it comfortably, you will want a strong GPU and enough VRAM to avoid a miserable experience.</p>
<p>If you are unsure which version to try, start with 4-bit quantization. It usually gives the best balance between quality, speed, and not making your hardware regret your decisions.</p>
<p>If storage is part of the problem, this guide on <a href="https://www.hongkiat.com/blog/ollama-llm-from-external-drive/">running Ollama models from an external drive</a> is worth bookmarking.</p>
<h2 id="run-gemma-4">How to Run Gemma 4 Locally</h2>
<p>The easiest option for most people is still Ollama.</p>
<h3 id="with-ollama">Run Gemma 4 with Ollama</h3>
<p>First, install Ollama from <a href="https://ollama.com/download" rel="nofollow noopener" target="_blank">ollama.com/download</a>.</p>
<p>Then run:</p>
<pre><code>ollama run gemma4</code></pre>
<p>That pulls the default E4B variant, which is roughly a 9 to 10 GB download.</p>
<p>If you want a specific model size, use one of these instead:</p>
<pre><code>ollama run gemma4:e2b
ollama run gemma4:26b-a4b
ollama run gemma4:26b
ollama run gemma4:31b</code></pre>
<p>Once it starts, you can chat with it directly in the terminal, much like you would with any other local model in Ollama. If you want to go further, this walkthrough on <a href="https://www.hongkiat.com/blog/vision-enabled-models-ollama-guide/">vision-enabled models in Ollama</a> is a good companion once you are comfortable with the basics.</p>
<p>If you are building apps or tools around it, Ollama also exposes an OpenAI-compatible API at:</p>
<pre><code>http://localhost:11434</code></pre>
<p>That makes it easy to plug Gemma 4 into existing local workflows without rebuilding everything from scratch.</p>
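<p>As a minimal sketch of what that looks like in practice (assuming Ollama is running locally with <code>gemma4</code> already pulled, and using its OpenAI-compatible <code>/v1/chat/completions</code> path):</p>

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model: str, prompt: str):
    """Build the (url, JSON body) pair for an OpenAI-style chat call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return OLLAMA_URL, json.dumps(payload).encode()

def ask(model: str, prompt: str) -> str:
    """Send the request; requires a running Ollama server on localhost."""
    url, body = build_request(model, prompt)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

<p>Because the request shape is the standard OpenAI chat format, anything already built against that API usually only needs the base URL swapped.</p>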
<h3 id="with-lm-studio">Prefer a GUI? Use LM Studio</h3>
<p>If you do not want to touch the terminal, LM Studio is the friendlier option.</p>
<ol>
<li>Download LM Studio from <a href="https://lmstudio.ai" rel="nofollow noopener" target="_blank">lmstudio.ai</a></li>
<li>Search for Gemma 4</li>
<li>Pick the quantized version you want</li>
<li>Download it and start chatting</li>
</ol>
<p>If you want a broader look at the tool itself, this post on <a href="https://www.hongkiat.com/blog/local-llm-setup-optimization-lm-studio/">running LLMs locally with LM Studio</a> covers the setup in more detail.</p>
<h3 id="for-developers">For Developers</h3>
<p>If you want more control, Gemma 4 models are also available through Hugging Face.</p>
<ul>
<li><code>google/gemma-4-E2B-it</code></li>
<li><code>google/gemma-4-E4B-it</code></li>
<li><code>google/gemma-4-26B-A4B-it</code></li>
<li><code>google/gemma-4-31B-it</code></li>
</ul>
<p>From there, you can run them using:</p>
<ul>
<li>Transformers</li>
<li><code>llama.cpp</code></li>
<li>GGUF builds</li>
<li>vLLM</li>
<li>Unsloth</li>
</ul>
<p>That route makes more sense if you care about custom serving, benchmarking, quantization experiments, or fitting the model into your own stack.</p>
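<p>If you script against those repos, a tiny helper keeps the ids straight (ids exactly as listed above; the commented <code>transformers</code> usage is the typical pattern, shown as a comment so nothing large is downloaded here):</p>

```python
# Map Gemma 4 size names to the Hugging Face repo ids listed above.
GEMMA4_REPOS = {
    "e2b": "google/gemma-4-E2B-it",
    "e4b": "google/gemma-4-E4B-it",
    "26b-a4b": "google/gemma-4-26B-A4B-it",
    "31b": "google/gemma-4-31B-it",
}

def repo_id(size: str) -> str:
    """Return the Hugging Face repo id for a given Gemma 4 size name."""
    return GEMMA4_REPOS[size.lower()]

# Typical usage with transformers (triggers a multi-GB download):
#   from transformers import pipeline
#   pipe = pipeline("text-generation", model=repo_id("e4b"))
```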
<h2 id="should-you-try-it">So, Should You Try It?</h2>
<p>If you are curious about local AI, yes.</p>
<p>Not because every model release deserves a standing ovation, but because Gemma 4 seems to hit a useful middle ground: open, capable, and available in sizes that make local experimentation realistic.</p>
<p>That matters more than flashy launch claims.</p>
<p>A model family becomes interesting when normal people can actually run it. Gemma 4 looks like one of those releases.</p>
<p>And if you have a halfway decent laptop or desktop, there is a good chance you can start today.</p><p>The post <a href="https://www.hongkiat.com/blog/run-gemma-4-locally/">Gemma 4 Just Dropped. Can Your Computer Handle It?</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74275</post-id>	</item>
		<item>
		<title>Anthropic Just Closed the OpenClaw Subscription Loophole</title>
		<link>https://www.hongkiat.com/blog/anthropic-closed-openclaw-loophole/</link>
		
		<dc:creator><![CDATA[Hongkiat Lim]]></dc:creator>
		<pubDate>Sat, 04 Apr 2026 05:47:16 +0000</pubDate>
				<category><![CDATA[Toolkit]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74269</guid>

					<description><![CDATA[<p>Anthropic has closed the OpenClaw subscription loophole, pushing heavy third-party agent use onto extra usage billing instead.</p>
<p>The post <a href="https://www.hongkiat.com/blog/anthropic-closed-openclaw-loophole/">Anthropic Just Closed the OpenClaw Subscription Loophole</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Anthropic has officially ended the practice of using Claude Pro, Max, or Team subscriptions to power high-volume automated agents through tools like OpenClaw at flat-rate pricing.</p>
<p>Starting at 12pm PT on April 4, 2026, Anthropic began enforcing a policy change. Third-party “harnesses” like OpenClaw no longer qualify for subscription usage limits. Heavy or automated workloads now trigger separate pay-as-you-go “extra usage” billing at full API rates. For agentic tasks that consume millions of tokens, this means bills that can run into hundreds or thousands of dollars on what used to be a $20 to $200/month subscription.</p>
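<p>A back-of-envelope sketch shows why the two pricing models collide. The per-token rates below are hypothetical placeholders, not Anthropic&#8217;s actual prices; the point is the shape of the math, not the exact figures:</p>

```python
# Illustrative per-token API rates (placeholders, not real pricing).
INPUT_RATE_PER_MTOK = 3.00    # hypothetical $ per million input tokens
OUTPUT_RATE_PER_MTOK = 15.00  # hypothetical $ per million output tokens

def monthly_cost(input_mtok: float, output_mtok: float) -> float:
    """Dollar cost for a month's token volume, in millions of tokens."""
    return input_mtok * INPUT_RATE_PER_MTOK + output_mtok * OUTPUT_RATE_PER_MTOK

# An always-on agent chewing through 100M input / 20M output tokens a month:
print(monthly_cost(100, 20))  # → 600.0
```

<p>Even at modest assumed rates, sustained agent traffic lands well above a $20 to $200 monthly subscription, which is exactly the gap the flat-rate workaround was exploiting.</p>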
<figure>
  <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/anthropic-closed-openclaw-loophole/claude-email.jpg" alt="Claude pricing email" width="1067" height="852"><figcaption>The email Anthropic sent to affected users about subscription limits and extra usage billing.</figcaption></figure>
<p>This wasn’t sudden. Back in January 2026, Anthropic quietly blocked some of the technical paths that made this workaround possible. February brought a formal terms update explicitly stating that OAuth tokens from consumer subscriptions are only for official Anthropic interfaces (Claude.ai, Claude Code, and so on). Their official <a rel="nofollow noopener" target="_blank" href="https://platform.claude.com/docs/en/agent-sdk/overview">Agent SDK overview</a> now says third-party developers are not allowed to offer claude.ai login or subscription rate limits for their own products unless Anthropic approved it beforehand. April 4 is when the enforcement teeth came in.</p>
<p>OpenClaw (also known historically as Clawdbot or Moltbot) is a popular open-source AI agent framework that enables persistent, autonomous agents. For months, a significant chunk of its user base had been routing their subscription OAuth tokens through OpenClaw to run sophisticated workflows far beyond typical chat usage. Effectively, a consumer plan was being used as a subsidized backend for agent swarms.</p>
<h2 id="can-you-still-use">Can You Still Use Claude with OpenClaw?</h2>
<p>Yes, but your costs will be higher for anything beyond light use.</p>
<p>Third-party harness usage now pulls from “extra usage” billing instead of your included subscription limits. Some community workarounds are being tested, such as routing through the local Claude CLI, but these may still incur extra charges and carry ToS risks. Affected users are being given a one-time credit equivalent to one month of subscription as a transition buffer.</p>
<p>The official line is now in Anthropic’s docs, not just community screenshots and email quotes. OAuth from Free, Pro, Max, or Team accounts should not be used in third-party products or services.</p>
<h2 id="what-this-means">What This Means in Practice</h2>
<p><strong>Cost.</strong> Workloads that previously ran efficiently on a $20 to $200/month subscription are now generating substantial extra usage bills. Some users were reportedly burning token volumes equivalent to thousands of dollars monthly, at standard API pricing.</p>
<p><strong>Infrastructure.</strong> Anthropic cited capacity strain from high-volume agents overloading consumer-tier systems designed for individual use.</p>
<p><strong>Where heavy users go from here.</strong> Subscriptions remain positioned for human-centric, official interfaces. Commercial or agentic use is directed toward proper API keys or Anthropic’s own tools like Claude Code. OpenClaw users are exploring model switches to OpenAI, Minimax, Kimi, or local options.</p>
<p>The subscription hack for running fleets of agents at discounted rates is over.</p>
<h2 id="public-reaction">The Public Reaction</h2>
<p>The announcement sparked backlash and debate across Reddit, Hacker News, and X.</p>
<p>On Reddit’s r/ClaudeAI, <a rel="nofollow noopener" target="_blank" href="https://www.reddit.com/r/ClaudeAI/comments/1r9v27c/all_the_openclaw_bros_are_having_a_meltdown_after/">“All the OpenClaw bros are having a meltdown after the Anthropic subscription lock-down”</a> captured the vibe. Comments described OpenClaw agents as wasteful, token-burning, and a “clobbered together mess.” Community consensus leaned toward Anthropic being right to crack down. One top comment: “Anthropic banned subscription OAuth tokens from all third-party tools, forcing users to either stick with Claude Code only or pay 5 to 10 times more via API keys.”</p>
<p>On Hacker News, reactions were more mixed. In <a rel="nofollow noopener" target="_blank" href="https://news.ycombinator.com/item?id=47633396">“Tell HN: Anthropic no longer allowing Claude Code subscriptions to use OpenClaw”</a>, one commenter noted: “Using the chatbot subscription as an API was a weird loophole. I don’t feel betrayed.” Another pointed to misaligned incentives: “The organization selling the per-token model doesn’t have the incentive, at scale, to have you reduce token consumption.” A sharper take: “OpenClaw literally brought them customers at the door and now they get them off their platform, with a strong feeling of dislike or hatred towards Anthropic.”</p>
<p>On X, reactions ranged from angry to analytical. <a rel="nofollow noopener" target="_blank" href="https://x.com/Scobleizer/status/2024536367913718271">Robert Scoble called it a fumble</a>, while another take was simply: <a rel="nofollow noopener" target="_blank" href="https://x.com/jordymaui/status/2037163206259433637">Anthropic wasn’t trying to kill OpenClaw</a>.</p>
<h2 id="my-take">My Take</h2>
<p>Anthropic’s rationale is straightforward. Consumer subscriptions were never designed to subsidize unlimited autonomous agent swarms, and the terms of service were always there. Enforcement just came faster than some expected.</p>
<p>Whether this strengthens their premium positioning or drives users toward competitors will play out over the coming months. For now, if you rely on OpenClaw with Claude, review your authentication setup, monitor for extra usage charges, and have a backup model lined up.</p>
<p>The era of cheap subscription fuel for third-party automation is over. Adjust accordingly.</p><p>The post <a href="https://www.hongkiat.com/blog/anthropic-closed-openclaw-loophole/">Anthropic Just Closed the OpenClaw Subscription Loophole</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74269</post-id>	</item>
		<item>
		<title>X Just Added a Thumbs Down Button for Replies</title>
		<link>https://www.hongkiat.com/blog/x-thumbs-down-replies/</link>
		
		<dc:creator><![CDATA[Hongkiat Lim]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 17:00:41 +0000</pubDate>
				<category><![CDATA[Social Media]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74247</guid>

					<description><![CDATA[<p>X has added a private thumbs-down button for replies. Small change on paper, but it could make messy threads a lot easier to read.</p>
<p>The post <a href="https://www.hongkiat.com/blog/x-thumbs-down-replies/">X Just Added a Thumbs Down Button for Replies</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I’ve been thinking about this because, honestly, X threads have become harder to read than they used to be.</p>
<p>Not because the main posts are worse.</p>
<p>Because the replies are often a mess.</p>
<p>You open a post that looks interesting, scroll down, and get hit with the usual mix: spam, ragebait, low-effort jokes, off-topic noise, and increasingly, AI-generated filler that somehow says a lot while contributing absolutely nothing.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/x-thumbs-down-replies/x-busy-thread-low-quality-replies.jpg" alt="Busy X reply thread" width="1200" height="675"><figcaption>A packed X reply thread.</figcaption></figure>
<p>So X adding a thumbs-down button doesn’t feel random.</p>
<p>It feels overdue.</p>
<p>The new feature is simple: X has added a thumbs-down button, but only for replies and comments under a post, not for the main post itself. That detail matters. This isn’t Reddit-style downvoting across the whole platform. It’s a quieter feedback tool aimed at cleaning up the part of X that usually needs the most help.</p>
<p>From what we’ve seen so far, the rollout started around March 18, 2026. That’s backed up by <a rel="nofollow noopener" target="_blank" href="https://x.com/grok/status/2034370218911211616">Grok’s announcement on X</a>, which says the feature had “just rolled out today” after a user request and Nikita Bier’s “give me 60 seconds” reply. Bier is X’s Head of Product, so this wasn’t just a random viral response. It was a pretty public glimpse into how quickly product experiments can move on the platform.</p>
<p>Fast product iteration is great.</p>
<p>Whether the feature actually improves the platform is the more interesting question.</p>
<h2 id="why-x-added">Why X Added the Thumbs Down Button</h2>
<p>The main timeline on X still runs on the usual signals: likes, reposts, replies, and bookmarks. There’s no visible downvote count on the original post, which is probably the right call.</p>
<p>The real problem has never been people liking posts too much.</p>
<p>It’s the reply section.</p>
<p>That’s where threads get clogged with low-quality content, bot replies, AI slop, promotional junk, people who clearly arrived just to make the experience worse for everyone else, and the endless pile of replies asking <code>@grok</code> questions that were already answered in the post they’re replying to.</p>
<p>The thumbs-down button looks like X’s attempt to deal with that without turning every conversation into a public dislike contest.</p>
<p>And I get the logic.</p>
<p>A private negative signal is a lot easier to work with than a public one. People can flag a reply as bad, irrelevant, misleading, or spammy without creating another performative metric for everyone to chase.</p>
<p>In theory, that gives X a cleaner ranking signal. Better replies rise. Worse ones sink. The thread becomes more readable without adding a giant public scoreboard of disapproval.</p>
<p>That’s the pitch, anyway.</p>
<h2 id="why-replies-only">Why Limiting It to Replies Makes Sense</h2>
<p>If X had added public downvotes to main posts, the whole thing would have turned into a mess almost instantly.</p>
<p>You’d get brigading.</p>
<p>You’d get coordinated dislike attacks.</p>
<p>You’d get people treating downvotes as identity warfare instead of feedback.</p>
<p>By limiting the feature to replies, X keeps the main post focused on distribution and engagement while using the reply layer as a quality-control zone.</p>
<p>That’s a much smarter place to experiment.</p>
<p>It also matches how people already use X. Most users aren’t asking for a public thumbs-down button on every post they disagree with. What they actually want is a way to bury the reply that adds nothing, derails the thread, or reads like it was assembled by a low-budget reply bot trained on engagement bait.</p>
<p>Private feedback is a cleaner mechanism for that.</p>
<h2 id="how-it-works">How the Thumbs Down Button Works</h2>
<p>If the feature has rolled out to your account, you’ll see a thumbs-down icon on individual replies.</p>
<p>It usually appears between the like button and the bookmark icon.</p>
<figure><img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/x-thumbs-down-replies/x-thumbs-down-reply-button.jpg" alt="X reply thumbs-down button" width="800" height="450"><figcaption>The thumbs-down button in the reply menu.</figcaption></figure>
<p>Tap it, and you’ll get a reply feedback prompt asking what’s wrong with the reply. The wording is along the lines of:</p>
<pre>Tell us what is wrong with this reply. Your feedback is private.</pre>
<p>That last part is important.</p>
<p>The dislike count isn’t public. The author doesn’t get a direct notification saying their reply was thumbs-downed. This isn’t meant to shame people in public. It’s meant to feed ranking signals back into the system.</p>
<p>At least for now, the feature also appears to be rolling out gradually, with verified users and Premium subscribers getting it first. That makes sense if X wants to reduce obvious bot abuse while testing the signal quality.</p>
<p>As of writing, the button appears to be showing up mainly on mobile, not the desktop web version of X. That could change quickly, but right now the rollout looks uneven across both accounts and platforms.</p>
<p>Server-side rollout means some eligible users won’t see it immediately.</p>
<p>So if you don’t have it yet, that doesn’t necessarily mean you’re missing something.</p>
<h2 id="what-happens-next">What Happens When You Tap It</h2>
<p>This is where the feature becomes more interesting than a generic dislike button.</p>
<p>You’re not just saying, “I don’t like this reply.”</p>
<p>You’re telling X <em>why</em>.</p>
<p>And that matters because the platform can use those signals differently depending on the category.</p>
<p>Here are the main feedback options users are seeing:</p>
<h3 id="not-interested">1. Not Interested in This Post</h3>
<p>This is for replies that feel irrelevant, low-value, or just not worth seeing.</p>
<p>Not abusive. Not dangerous. Just bad.</p>
<p>That kind of signal is useful because a lot of the worst replies on X aren’t technically violations. They’re just noise.</p>
<h3 id="incorrect-misleading">2. Incorrect or Misleading</h3>
<p>This option is for replies that contain wrong information, misleading claims, or content that may deserve closer scrutiny.</p>
<p>Some users also report that this path can connect with Community Notes workflows, which makes sense. If enough people flag the same kind of misinformation, X has a reason to look harder at it.</p>
<h3 id="ai-generated">3. AI-Generated</h3>
<p>This might end up being the most used option of the bunch.</p>
<p>If you’ve spent any time in large threads lately, you’ve probably seen the pattern: polished-looking replies that say nothing, fake agreement replies, generic summaries, weirdly enthusiastic bots, and a flood of synthetic engagement that makes the whole thread feel hollow.</p>
<p>Giving people a direct way to flag that is probably one of the more practical things X has done in a while.</p>
<h3 id="spam">4. Spam</h3>
<p>This one’s straightforward.</p>
<p>Promotional junk, repetitive replies, scammy messages, phishing attempts, and all the usual garbage that shows up whenever a platform still has enough reach to be worth exploiting.</p>
<h3 id="report-post">5. Report Post</h3>
<p>This is the escalation path.</p>
<p>If the reply crosses into <a href="https://www.hongkiat.com/blog/minimize-online-harassment/">harassment, hate speech</a>, impersonation, or clear rule violations, this option hands the issue off to X’s formal reporting flow instead of just using it as a ranking signal.</p>
<p>That’s a useful distinction.</p>
<p>Not every bad reply should be treated the same.</p>
<p>Some deserve demotion. Others deserve review.</p>
<h2 id="what-this-improves">What This Could Improve</h2>
<p>If X gets the ranking right, this feature could make reply sections a lot more readable.</p>
<p>That’s not a small thing.</p>
<p>One of the reasons long threads feel worse now isn’t that the original posts are always bad. It’s that the replies often bury the interesting conversation under layers of low-effort sludge.</p>
<p>A private thumbs-down button gives X a way to sort for signal instead of just volume.</p>
<p>That could lead to a few obvious improvements:</p>
<ul>
<li>fewer spammy and low-value replies near the top</li>
<li>less visibility for AI-generated filler</li>
<li>better discussion quality in high-traffic threads</li>
<li>less incentive for reply farming and ragebait tactics</li>
</ul>
<p>And because dislike counts stay private, the whole system should be less dramatic than a visible downvote counter.</p>
<p>In theory, anyway.</p>
<p>The real risk is whether people use it thoughtfully, and whether X can separate useful feedback from coordinated misuse.</p>
<p>That’s always the hard part.</p>
<p>Also worth noting: feature rollouts tend to attract scams.</p>
<p>If you get an email or message claiming you need to verify your account to unlock the new dislike feature, treat that with suspicion. X feature launches are noisy enough on their own. Scammers love that kind of confusion.</p>
<h2 id="my-take">My Take</h2>
<p>This feels like one of those small product changes that could have a surprisingly large effect.</p>
<p>Not because the button itself is revolutionary.</p>
<p>It isn’t.</p>
<p>But because reply quality has become one of the biggest reasons X feels worse to use than it used to.</p>
<p>If this helps bury spam, demote AI slop, and reduce the visibility of junk replies without turning every thread into a public downvote circus, that’s a good trade.</p>
<p>The strange part?</p>
<p>What users probably wanted wasn’t more public feedback.</p>
<p>They wanted better conversations.</p>
<p>And this might actually be a more sensible way to get there.</p><p>The post <a href="https://www.hongkiat.com/blog/x-thumbs-down-replies/">X Just Added a Thumbs Down Button for Replies</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74247</post-id>	</item>
		<item>
		<title>How to Use Grok on Mac – Complete Guide</title>
		<link>https://www.hongkiat.com/blog/install-grok-mac/</link>
		
		<dc:creator><![CDATA[Hongkiat Lim]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 13:00:40 +0000</pubDate>
				<category><![CDATA[Desktop]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74232</guid>

					<description><![CDATA[<p>Grok is an AI assistant from xAI (Elon Musk’s company). It’s similar to chatbots like ChatGPT, Claude, and others. You can ask questions and have conversations with it. Much of what Grok knows comes from X (formerly Twitter), so it stays pretty current. People usually use Grok in a few places: on the X site&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/install-grok-mac/">How to Use Grok on Mac &#8211; Complete Guide</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Grok is an AI assistant from xAI (Elon Musk’s company). It’s similar to chatbots like ChatGPT, Claude, and others. You can ask questions and have conversations with it. Much of what Grok knows comes from X (formerly Twitter), so it stays pretty current.</p>
<p>People usually use Grok in a few places: on the X site at <strong>x.com/grok</strong>, on <strong>grok.com</strong>, or in the X app on their phone.</p>
<h2>Does Grok have an app for Mac?</h2>
<p>Right now there’s no standalone Grok app for Mac, and no X app for Mac either. X and Grok exist as apps on iPhone and iPad, but not on macOS.</p>
<p>You can still use Grok on your Mac with a few workarounds. Here’s how.</p>
<h2>Why use Grok on your Mac?</h2>
<p>Having Grok on your Mac means you don’t have to open x.com, log in, and hunt for Grok every time. You can jump straight into a chat instead.</p>
<h2>How to install Grok on Mac</h2>
<h3>1. Use Raycast</h3>
<p><a rel="nofollow noopener" target="_blank" href="https://www.raycast.com">Raycast</a> is a popular Mac launcher and productivity tool. You bring it up with a shortcut (usually <kbd>Option</kbd> + <kbd>Space</kbd>), then type to search apps, run commands, and more.</p>
<p><strong>Raycast AI</strong> is built in. You can pick from a bunch of AI models in one place, including Grok. You can switch between providers and models inside Raycast AI.</p>
<p>Free users get a 50-message trial and can try any of the Pro models, including Grok. You can test it before paying.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-grok-mac/raycast.jpg" width="1534" height="1273" alt="Raycast AI launcher on Mac with Grok and other AI models in the command palette">
</figure>
<p>The Grok models currently available in Raycast include:</p>
<ul>
<li>Grok-4 (and variants such as Grok-4 Fast and Grok-4 Reasoning)</li>
<li>Grok-3 Beta and Grok-3 Mini Beta</li>
<li>Grok-4.1 Fast</li>
</ul>
<h3>2. Use Fello AI</h3>
<p><strong>Fello.ai</strong> is another option in the same vein as Raycast. It’s a desktop app for Mac (and iOS/iPadOS) where you can talk to several AI models in one place: ChatGPT, Claude, Gemini, DeepSeek, and Grok.</p>
<p>You switch models from a menu. You can pin chats, search history, and upload files. One app instead of juggling a bunch of AI sites.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/install-grok-mac/felloai.jpg" width="2378" height="1436" alt="Fello AI app on Mac with ChatGPT, Claude, Gemini, DeepSeek and Grok in one interface">
</figure>
<p>Fello AI has a free tier. You get a limited number of questions per hour (the limit varies), and Grok is among the included models. The count resets every hour. The free tier doesn’t include PDF or image analysis; that’s on paid plans.</p>
<p>If you need more questions or file support, you’ll need a subscription. Paid plans are weekly ($3.99), monthly ($9.99), or yearly ($79.99). Check the Mac App Store or <a rel="nofollow noopener" target="_blank" href="https://felloai.com">felloai.com</a> for current pricing.</p>
<ol>
<li>Search for Fello AI in the App Store, install it, and open the app. Click outside the subscription screen if you want to try the free tier first.</li>
<li>In the main window, open the Model menu (top right) and choose Grok.</li>
<li>Type your question in the text field. To attach a file (e.g. for summarization), use Attach Files and press <kbd>Return</kbd>.</li>
</ol>
<h3>3. Use macos-grok-overlay</h3>
<p><a rel="nofollow noopener" target="_blank" href="https://github.com/tchlux/macos-grok-overlay">macos-grok-overlay</a> is a small open-source app that puts <strong>grok.com</strong> in its own window. You open or hide it with a keyboard shortcut. It’s a dedicated Grok window that floats on top of your other apps. You don’t need a browser tab open.</p>
<p>The app runs in the background and uses <kbd>Option</kbd> + <kbd>Space</kbd> to show or hide the Grok window. Log in to grok.com once in that window. After that, Grok is one key combo away.</p>
<p>You can install it two ways. Easiest: download the <a rel="nofollow noopener" target="_blank" href="https://github.com/tchlux/macos-grok-overlay">DMG from the GitHub repo</a> and drag the app into Applications (works on both Intel and Apple Silicon).</p>
<p>Or, if you have Python and pip, use the steps below. The first time you launch it, macOS will ask for Accessibility permission so the app can listen for <kbd>Option</kbd> + <kbd>Space</kbd>. You have to allow that or the shortcut won’t work.</p>
<ol>
<li>In Terminal, run: <code>python3 -m pip install macos-grok-overlay</code> then press <kbd>Return</kbd>.</li>
<li>To start it at login, run: <code>macos-grok-overlay --install-startup</code> and press <kbd>Return</kbd>.</li>
<li>Launch the app, grant Accessibility when prompted, then log in to grok.com in the window that opens. After that, press <kbd>Option</kbd> + <kbd>Space</kbd> anytime to show or hide the Grok overlay.</li>
</ol>
<h3>4. Use Ollama and run Grok locally</h3>
<p><strong>Ollama</strong> lets you run large language models (LLMs) on your own Mac or PC. Install it, pull the models you want, then chat from the terminal or a web UI. No xAI or X account is needed. For a full walkthrough (including Open WebUI), see <a href="https://www.hongkiat.com/blog/run-offline-chat-assistant/">How to Run Chat Assistant that Works Offline</a>.</p>
<p>Here you download a Grok model and run it locally. Community-quantized Grok 2 models are on Ollama, e.g. <a rel="nofollow noopener" target="_blank" href="https://ollama.com/MichelRosselli/grok-2">MichelRosselli/grok-2</a>. After installing Ollama, pull and run a variant with:</p>
<p><code>ollama run MichelRosselli/grok-2</code></p>
<p>Heads up: these Grok 2 models are huge. Even smaller quantized builds are around 82 to 100GB. Full-quality can be 160GB or more.</p>
<p>You’ll want a strong machine: at least <strong>32GB RAM</strong> (64GB or more for the bigger quants), <strong>100GB+ free storage</strong>, and ideally a high-end GPU with plenty of VRAM (e.g. 24GB+) for faster inference. On Apple Silicon, unified memory helps; 64GB or 96GB is more realistic for the big models.</p>
<p>If your Mac only has 8 to 16GB RAM and limited storage, this route is probably not worth it. The options above are much lighter.</p>
<h2>Alternative ways to run Grok on your Mac</h2>
<p>As of this writing there’s no word on a native Grok or X app for Mac. The four methods above all need a third-party app or a big model download.</p>
<p>If you want to use Grok on your Mac with zero installs, the browser is a solid option. It’s the fastest.</p>
<p>Log in to X in your browser and open <strong>x.com/grok</strong>. Then make it easy to get back: bookmark the page (<kbd>Cmd</kbd> + <kbd>D</kbd> in Safari or Chrome). In <strong>Safari</strong> you can also add the page to your Dock so it opens like an app. With x.com/grok open, go to <strong>File</strong> → <strong>Add to Dock</strong>. A Grok icon shows up in the Dock; click it whenever you want Grok without digging through bookmarks.</p>
<ol>
<li>In your browser, go to <strong>x.com</strong> and log in.</li>
<li>Open <strong>x.com/grok</strong>.</li>
<li>Bookmark the page (<kbd>Cmd</kbd> + <kbd>D</kbd>), or in Safari use <strong>File</strong> → <strong>Add to Dock</strong> so you can open Grok with one click from the Dock.</li>
</ol>
<h2>Conclusion</h2>
<p>There’s no official Grok or X app for Mac yet, but you’ve got options: launchers like Raycast, a dedicated Grok window (macos-grok-overlay), all-in-one apps like Fello AI, or running Grok locally with Ollama if your machine is up to it. Or just use x.com/grok in the browser and add it to your Dock.</p>
<p>Pick whatever fits how you work and how much you’re okay installing.</p>
<h2 id="frequently-asked-questions">Frequently Asked Questions</h2>
<div class="faq">
<h3>Is there an official Grok app for Mac?</h3>
<p>Not yet. xAI hasn’t released a standalone Grok app or an X desktop app for Mac. You can still use Grok via the methods above: Raycast, Fello AI, macos-grok-overlay, Ollama, or the browser.</p>
<h3>Do I need an X (Twitter) account or subscription to use Grok?</h3>
<p>It depends on how you’re using it. x.com/grok and macos-grok-overlay need you to log in with an X account. Access may depend on your X subscription.</p>
<p>Standalone grok.com and some third-party apps (e.g. Fello AI, Raycast AI) can use different sign-in or trials. Check each service.</p>
<h3>Which method should I choose?</h3>
<p>Zero installs: use the browser and add x.com/grok to your Dock or bookmarks. Already use Raycast or want one shortcut for lots of AIs: try Raycast AI.</p>
<p>Want a dedicated Grok window with <kbd>Option</kbd> + <kbd>Space</kbd>? Use macos-grok-overlay. Want several AIs including Grok in one app with a free tier? Try Fello AI. Have a powerful Mac and want Grok fully offline? Look at Ollama and a local Grok model.</p>
<h3>Can I use Grok for free?</h3>
<p>Partly. Raycast AI and Fello AI have free tiers or trials that include Grok. x.com/grok and grok.com depend on your X or xAI account and whatever subscription rules apply at the time.</p>
<p>Running Grok via Ollama is free, but you need enough RAM and storage (and ideally a decent GPU) for the model files.</p>
</div><p>The post <a href="https://www.hongkiat.com/blog/install-grok-mac/">How to Use Grok on Mac &#8211; Complete Guide</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74232</post-id>	</item>
		<item>
		<title>Auto-Organize Mac Screenshots into Folder and Rename with AI</title>
		<link>https://www.hongkiat.com/blog/organize-mac-screenshots-ai-rename/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Thu, 19 Feb 2026 13:00:24 +0000</pubDate>
				<category><![CDATA[Desktop]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74227</guid>

					<description><![CDATA[<p>Mac’s built-in screenshot tool has two major issues: Screenshots save to your Desktop by default, creating clutter Filenames are based on timestamps (e.g., “Screen Shot 2026-01-09 at 2.30.00 PM.png“), which aren’t descriptive This post will show you how to solve both problems by automatically saving screenshots to a custom folder with meaningful, descriptive filenames. Why&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/organize-mac-screenshots-ai-rename/">Auto-Organize Mac Screenshots into Folder and Rename with AI</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Mac’s built-in screenshot tool has two major issues:</p>
<ul>
<li>Screenshots save to your Desktop by default, creating clutter</li>
<li>Filenames are based on timestamps (e.g., “<code>Screen Shot 2026-01-09 at 2.30.00 PM.png</code>”), which aren’t descriptive</li>
</ul>
<p>This post will show you how to solve both problems by automatically saving screenshots to a custom folder with meaningful, descriptive filenames.</p>
<hr>
<h2>Why does this matter?</h2>
<p>Over time, you’ll accumulate tons of screenshots with unhelpful filenames. Finding the right one becomes difficult. You might tell yourself you’ll organize and rename them later, but let’s be honest – that rarely happens.</p>
<p>Eventually, the clutter becomes overwhelming and you delete them in bulk, often losing screenshots you actually need.</p>
<figure>
      <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/messy-looking-screenshot-files.jpg" alt="Mac desktop cluttered with disorganized screenshot files" width="1273" height="971"><figcaption>A Mac desktop cluttered with untidy, auto-named screenshot files.</figcaption></figure>
<hr>
<h2>What we’re building</h2>
<p>Here’s what we’ll do using Mac’s native features (no additional apps required): Every time you take a screenshot, your Mac will automatically save it to a custom folder, analyze the content, and give it a short, descriptive filename.</p>
<p>Basically, you’ll turn messy screenshot files like these (left) into organized, descriptive ones like these (right).</p>
<figure>
      <img decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/before-after-comparison.jpg" alt="Before and after comparison of Mac screenshot file organization"><figcaption>Before (left): Messy, default-named screenshots. After (right): Automatically organized and clearly labeled screenshots.</figcaption></figure>
<hr>
<h2>Getting started</h2>
<p>This setup takes about 5-10 minutes and requires Mac’s Terminal and Shortcuts apps.</p>
<p><strong>System requirements:</strong></p>
<ul>
<li>Apple silicon Mac (M1, M2, or later)</li>
<li>Latest macOS version</li>
<li>Apple Intelligence enabled</li>
</ul>
<p>Now, let’s get started.</p>
<hr>
<h2>1. Specify a folder for screenshots</h2>
<p>First, let’s configure macOS to automatically save all screenshots to a specific folder.</p>
<p><strong>Steps:</strong></p>
<ol>
<li>Create a folder named “Screenshots” (or any name you prefer)</li>
<li>Open the Terminal app</li>
<li>Type the following command followed by a space (don’t press Enter yet):
<pre><code>defaults write com.apple.screencapture location </code></pre>
<figure>
            <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/paste-command.jpg" alt="Terminal command for changing Mac screenshot location" width="1648" height="388">
         </figure>
</li>
<li>Drag and drop your folder from Finder into the Terminal window – this automatically fills in the folder path
<figure>
            <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/drag-drop-folder.jpg" alt="Dragging folder from Finder into Mac Terminal window" width="1648" height="388">
         </figure>
</li>
<li>Press Enter</li>
<li>Apply the changes by running:
<pre><code>killall SystemUIServer</code></pre>
</li>
</ol>
<p>That’s it! Exit the Terminal app and take a few test screenshots. They should now save automatically to your designated folder.</p>
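<p>If you prefer typing the full command over dragging the folder in, the whole of step 1 collapses into a few lines (this sketch assumes a <code>Screenshots</code> folder in your home directory):</p>

```shell
# Create the folder and point macOS screenshot capture at it
mkdir -p ~/Screenshots
defaults write com.apple.screencapture location ~/Screenshots

# Restart the system UI server so the new location takes effect
killall SystemUIServer

# Optional: verify the setting
defaults read com.apple.screencapture location
```

<p>To revert to the default Desktop location later, run <code>defaults delete com.apple.screencapture location</code> followed by the same <code>killall</code> command.</p>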
<hr>
<h2>2. Building the automation</h2>
<p>Here’s what we’re building: whenever a screenshot is saved to your folder, it will be automatically analyzed and renamed with a meaningful, descriptive filename.</p>
<p><strong>Steps:</strong></p>
<ol>
<li>Launch the Shortcuts app</li>
<li>Create a new Automation: Go to <strong>File &gt; New Automation</strong>, select <strong>Folder</strong>, then click <strong>Next</strong>
         <video width="100%" height="auto" controls><source src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/new-automation.mp4" type="video/mp4">Your browser does not support the video tag.</source></video>
      </li>
<li>In the next screen:
<ul>
<li>Click <strong>Choose Folder</strong>, navigate to your screenshot folder, and click <strong>Select</strong></li>
<li>Under “When Any Items Is”, check <strong>Added</strong></li>
<li>Check <strong>Run Immediately</strong></li>
</ul>
<figure>
            <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/the-when-screen.jpg" alt="Mac Shortcuts automation folder trigger settings" width="1979" height="1531">
         </figure>
</li>
<li>Verify everything is correct (see screenshot below), then click <strong>Next</strong></li>
</ol>
<p>On the next screen, click “New Shortcut” to continue, and then you’ll be directed to the shortcut’s interface.</p>
<figure>
      <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/new-shortcut.jpg" alt="Mac Shortcuts app new automation interface" width="1999" height="1520">
   </figure>
<h5>Follow the steps below carefully to build the shortcut:</h5>
<p><strong>1. Add the “Repeat with Each” action:</strong></p>
<ul>
<li>In <strong>“Search Actions”</strong> (top right), search for <strong>“Repeat with Each”</strong> and double-click to add it
<figure>
         <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/repeat-with-each.jpg" alt="Adding Repeat with Each action in Mac Shortcuts" width="2391" height="676">
      </figure>
</li>
<li>Click the blue <strong>“Items”</strong>, select <strong>“Shortcut Input”</strong> from the dropdown </li>
<li>Click <strong>“Shortcut Input”</strong> again, then select <strong>“Added files”</strong>
<figure>
            <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/shortcut-input-added-files.jpg" alt="Selecting Added files as Shortcut Input in Mac Shortcuts" width="2394" height="1029">
         </figure>
</li>
</ul>
<p><strong>2. Add the “Get Images from Input” action:</strong></p>
<ul>
<li>Search for <strong>“Get Images from Input”</strong> and double-click to add it</li>
<li>Drag it between <strong>“Repeat with…”</strong> and <strong>“End Repeat”</strong>
      <video width="100%" height="auto" controls><source src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/get-image-from-input.mp4" type="video/mp4">Your browser does not support the video tag.</source></video>
</li>
</ul>
<p>Check your shortcut against the screenshot below to verify it’s correct.</p>
<figure>
      <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/check-shorcuts.jpg" alt="Mac Shortcuts workflow with Get Images and Repeat actions" width="1796" height="945">
   </figure>
<p><strong>3. Add the “Use Model” action:</strong></p>
<ul>
<li>Search for <strong>“Use Model”</strong> and add it to the panel
<figure>
            <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/use-model.jpg" alt="Adding Apple Intelligence Use Model action in Mac Shortcuts" width="2391" height="860">
         </figure>
</li>
<li>Click the textbox containing <strong>“Images”</strong>, clear it, and paste this prompt:</li>
</ul>
<pre>Analyze the image and provide a short, simple descriptive name based on these rules:
* <strong>Length:</strong> Use 3 to 5 words.
* <strong>Format:</strong> Do not end the description with a period (.).
* <strong>Style:</strong> Keep it clear, concise, and descriptive of the main subject.</pre>
<ul>
<li>At the end of the textbox, right-click and select <strong>“Insert Variable &gt; Images”</strong></li>
</ul>
<p>   <video width="100%" height="auto" controls><source src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/add-prompt.mp4" type="video/mp4">Your browser does not support the video tag.</source></video></p>
<p><strong>4. Add the “Rename File” action:</strong></p>
<ul>
<li>Search for <strong>“Rename File”</strong> and add it above <strong>“End Repeat”</strong>
<figure>
            <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/rename-file.jpg" alt="Adding Rename File action in Mac Shortcuts automation" width="2387" height="1124">
         </figure>
</li>
<li>Right-click <strong>“Response”</strong> and change it to <strong>“Repeat Item”</strong></li>
<li>Right-click <strong>“Name”</strong> and change it to <strong>“Response”</strong></li>
</ul>
<p>   <video width="100%" height="auto" controls><source src="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/change-response-and-time.mp4" type="video/mp4">Your browser does not support the video tag.</source></video></p>
<p>That’s it! Take a test screenshot. The first time, you’ll see a permission prompt – click <strong>Okay</strong> to allow access to your screenshot folder.</p>
<p>Take a few more screenshots and watch them automatically get renamed with descriptive filenames in your designated folder.</p>
<hr>
<h2>Final thoughts</h2>
<p>If the renamed screenshots aren’t exactly what you want, feel free to edit the prompt. With this automation quietly working in the background, your screenshots will be better organized, easier to search via Spotlight, and easier to identify at a glance.</p><p>The post <a href="https://www.hongkiat.com/blog/organize-mac-screenshots-ai-rename/">Auto-Organize Mac Screenshots into Folder and Rename with AI</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		<enclosure length="858432" type="video/mp4" url="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/new-automation.mp4"/>
<enclosure length="272943" type="video/mp4" url="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/get-image-from-input.mp4"/>
<enclosure length="769066" type="video/mp4" url="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/add-prompt.mp4"/>
<enclosure length="660780" type="video/mp4" url="https://assets.hongkiat.com/uploads/organize-mac-screenshots-ai-rename/change-response-and-time.mp4"/>

		<post-id xmlns="com-wordpress:feed-additions:1">74227</post-id>	</item>
		<item>
		<title>Which Creative Cloud Plan Fits Your Photoshop Use?</title>
		<link>https://www.hongkiat.com/blog/adobe-creative-cloud-plans-photoshop/</link>
		
		<dc:creator><![CDATA[Hongkiat.com]]></dc:creator>
		<pubDate>Mon, 16 Feb 2026 13:00:08 +0000</pubDate>
				<category><![CDATA[Photoshop]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74225</guid>

					<description><![CDATA[<p>Adobe’s plans can be confusing. If you’re only interested in using Photoshop, you might often wonder about the following: Best plan for the price? Am I overpaying for apps I don’t use? How many AI / Generative Fill credits do I get? Do I need the full suite? As a Photoshop user, Adobe’s marketing can&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/adobe-creative-cloud-plans-photoshop/">Which Creative Cloud Plan Fits Your Photoshop Use?</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Adobe’s plans can be confusing. If you’re only interested in using Photoshop, you might often wonder about the following:</p>
<ul>
<li><em>Best plan for the price?</em></li>
<li><em>Am I overpaying for apps I don’t use?</em></li>
<li><em>How many AI / Generative Fill credits do I get?</em></li>
<li><em>Do I need the full suite?</em></li>
</ul>
<p>As a Photoshop user, Adobe’s marketing can feel overwhelming. Their website looks slick but hides crucial details about what you’re actually getting. Prices vary by region, AI credit allocations change without warning, and the differences between plans are buried in footnotes.</p>
<p>This guide is specifically for Photoshop users like you. All information is current as of November 2025.</p>
<p>By the time you finish reading, you’ll know:</p>
<ul>
<li><a href="#plans">The 4 plans that include Photoshop</a></li>
<li><a href="#plan-comparison">Photoshop plan vs Pro: what’s different?</a></li>
<li><a href="#ai-features">AI credits by plan</a></li>
<li><a href="#credits-depleted">When you run out of credits</a></li>
<li><a href="#downgrade-plan">Downgrading without losing your work</a></li>
</ul>
<p>No promos, no affiliate links, no upsell. Just the facts Photoshop users need to make the right choice and stop overpaying.</p>
<p>Let’s look at the only plans that include Photoshop.</p>
<hr>
<h2 id="plans">The 4 Plans That Include Photoshop</h2>
<p>Here’s what you need to know about the plans that actually include Photoshop (prices shown are the standard monthly rate when paid annually, before local taxes):</p>
<table>
<thead>
<tr>
<th>Plan</th>
<th>Price / Month (paid yearly)</th>
<th>Main Apps Included</th>
<th>Cloud Storage</th>
<th>Monthly Generative AI Credits</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creative Cloud Pro</td>
<td>US$59.99</td>
<td>All 20+ apps (Photoshop, Illustrator, Premiere, After Effects, InDesign, etc.)</td>
<td>1 TB</td>
<td>4,000 + unlimited standard generations</td>
</tr>
<tr>
<td>Creative Cloud All Apps</td>
<td>US$54.99 – $59.99</td>
<td>Same 20+ apps as Pro</td>
<td>100 GB</td>
<td>1,000</td>
</tr>
<tr>
<td>Photoshop Single App Plan</td>
<td>US$22.99</td>
<td>Photoshop + Lightroom (limited) + Fresco</td>
<td>100 GB</td>
<td>25–500 (depends on join date)</td>
</tr>
<tr>
<td>Photography Plan (1 TB)</td>
<td>US$19.99</td>
<td>Photoshop + full Lightroom + Lightroom Classic</td>
<td>1 TB</td>
<td>250–1,000 (depends on join date)</td>
</tr>
</tbody>
</table>
<h4>What Photoshop users need to know:</h4>
<ul>
<li>You get the exact same Photoshop application in all four plans; there’s no “lite” version in cheaper plans.</li>
<li>The main differences are in AI credits, cloud storage, and whether you get Lightroom Classic.</li>
<li>If you only use Photoshop, the Single App plan saves you over $400/year compared to Pro.</li>
<li>For photographers who edit in both Photoshop and Lightroom Classic, the Photography Plan is actually cheaper than the Photoshop-only plan.</li>
</ul>
<hr>
<h2 id="photoshop-only">Photoshop-Only Plan: What You Get</h2>
<p>If you’re primarily a Photoshop user who rarely touches other Adobe apps, this is the plan designed specifically for you. It’s officially called:</p>
<p><strong>Photoshop Single App Plan – US$22.99/mo (paid annually) or US$34.49/mo month-to-month</strong></p>
<p>What you actually get:</p>
<ul>
<li>Full desktop Photoshop (exactly the same version as the US$60 plans)</li>
<li>Lightroom (web and mobile versions only, not the full Lightroom Classic)</li>
<li>Lightroom Classic is NOT included</li>
<li>Adobe Fresco and Portfolio</li>
<li>100 GB cloud storage</li>
<li>Generative AI features inside Photoshop (Generative Fill, Generative Expand, Text-to-Image)</li>
</ul>
<h5>Important catch:</h5>
<p>Generative AI credits on this plan are tiny compared to the big plans.</p>
<ul>
<li>If you subscribed before the recent policy change → you still get 500 credits/month</li>
<li>If you subscribed after the recent policy change → you only get 25 credits/month</li>
</ul>
<p>25 credits means roughly 25 Generative Fills or 25 AI-generated images per month. Once they’re gone, the features don’t stop working, but they become noticeably slower (Adobe puts you in the “slow queue”).</p>
<p><strong>Bottom line:</strong> If you only need Photoshop and you’ll use Generative Fill lightly (a few times a week at most), the Photoshop Single App plan is the clear winner. You’ll save ~US$450–500 per year compared to Creative Cloud Pro.</p>
<hr>
<h2 id="ai-features">AI Credits by Plan</h2>
<p>Here’s exactly what Photoshop users get with each plan – no marketing fluff:</p>
<table>
<thead>
<tr>
<th>Plan</th>
<th>Monthly Generative AI Credits</th>
<th>Unlimited “Standard” Generations?</th>
<th>Real-World Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creative Cloud Pro</td>
<td>4,000</td>
<td>Yes</td>
<td>You can generate thousands of images or ~40 five-second AI videos per month without ever slowing down.</td>
</tr>
<tr>
<td>Creative Cloud All Apps</td>
<td>1,000</td>
<td>No</td>
<td>Good for moderate daily use; heavy users hit the limit in 10–20 days.</td>
</tr>
<tr>
<td>Photoshop Single App Plan</td>
<td>25 (new subscribers) or 500 (long-term subscribers)</td>
<td>No</td>
<td>New subscribers: ~25 AI actions per month. Long-term subscribers: 500, still plenty for regular use.</td>
</tr>
<tr>
<td>Photography Plan (1 TB)</td>
<td>250–1,000 (depends on when you joined)</td>
<td>No</td>
<td>Usually enough for photographers who use Generative Fill occasionally.</td>
</tr>
</tbody>
</table>
<h4>What counts as 1 credit? (Adobe’s official rates)</h4>
<ul>
<li>1 credit → 1 standard Generative Fill / Expand / Text-to-Image (up to 2048×2048 px)</li>
<li>25–100 credits → 1 short AI video clip (3–5 seconds in Premiere or Firefly)</li>
<li>4–8 credits → 1 high-resolution or complex generation</li>
</ul>
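<p>To see how these rates add up for your own workflow, here’s a quick back-of-the-envelope estimator, a hypothetical Python sketch using the low end of each of Adobe’s published ranges above:</p>

```python
# Approximate credit cost per generation type, taken from the rates above.
# Video and hi-res use the LOW end of their ranges (25-100 and 4-8).
CREDIT_COST = {
    "standard": 1,   # Generative Fill / Expand / Text-to-Image (up to 2048x2048)
    "video": 25,     # short 3-5 second AI video clip
    "hires": 4,      # high-resolution or complex generation
}

def monthly_credits(standard=0, video=0, hires=0):
    """Return the minimum credits a month of this usage would consume."""
    return (standard * CREDIT_COST["standard"]
            + video * CREDIT_COST["video"]
            + hires * CREDIT_COST["hires"])

# Example: 200 fills, 4 short clips, 20 hi-res generations per month
print(monthly_credits(standard=200, video=4, hires=20))  # 380
```

<p>At that pace you’d blow past the 25-credit Photoshop Plan allowance in a day or two, but sit comfortably inside the 1,000-credit All Apps budget.</p>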
<h5>Pro tip:</h5>
<p>Creative Cloud Pro is the only individual plan that gives “unlimited standard generations” on top of the 4,000 credits. That means basic Text-to-Image and Generative Fill never slow down, even if you somehow burn through all 4,000 credits (almost impossible for one person).</p>
<hr>
<h2 id="ai-credits-usage">What Those Credits Get You</h2>
<p>Numbers on a screen don’t mean much until you see real examples. Here’s what the monthly allowance actually translates to in daily life (using Adobe’s current credit costs):</p>
<h5>Creative Cloud Pro – 4,000 credits + unlimited standard</h5>
<ul>
<li>≈ 4,000 normal Generative Fills or text-to-image generations</li>
<li>≈ 40–160 five-second AI video clips (depending on the video tool’s credit cost)</li>
<li>≈ 500–1,000 higher-quality or 4K upscale generations</li>
</ul>
<p><strong>Real life:</strong> You can literally generate images all day, every day and never hit a wall. Most solo creators never use more than 1,000–1,500 credits in a month.</p>
<h5>All Apps – 1,000 credits</h5>
<ul>
<li>≈ 1,000 regular images or fills</li>
<li>≈ 40 short AI videos</li>
</ul>
<p><strong>Real life:</strong> Fine for full-time designers or YouTubers who use AI daily but not obsessively. Heavy users run out around day 20.</p>
<h5>Photoshop Plan (new subscriber) – 25 credits</h5>
<ul>
<li>Exactly 25 Generative Fills or 25 text-to-image generations per month</li>
</ul>
<p><strong>Real life:</strong> About one AI edit per day if you spread it out. After that, everything switches to the slow queue (can take 30–60 seconds instead of 3–5 seconds).</p>
<h5>Photoshop Plan (old subscriber) – 500 credits</h5>
<ul>
<li>Still 500 decent-quality generations per month</li>
</ul>
<p><strong>Real life:</strong> Plenty for most months unless you’re doing client work with heavy AI.</p>
<h5>Photography Plan – 250 to 1,000 credits</h5>
<ul>
<li>Enough for removing backgrounds, extending skies, or object removal on hundreds of photos per month.</li>
</ul>
<p><strong>Bottom line:</strong> If you ever find yourself thinking “I wish Generative Fill was faster” or “I hit my limit again,” you’re on the wrong plan.</p>
<hr>
<h2 id="plan-comparison">Photoshop Plan vs Pro: What’s Different?</h2>
<table>
<thead>
<tr>
<th></th>
<th>Photoshop Single App Plan (~US$22.99/mo)</th>
<th>Creative Cloud Pro (~US$59.99/mo)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Full Photoshop (desktop)</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Lightroom Classic</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td> Illustrator, Premiere, After Effects, InDesign, etc. </td>
<td>No</td>
<td>Yes (20+ apps)</td>
</tr>
<tr>
<td>Cloud storage</td>
<td>100 GB</td>
<td>1 TB</td>
</tr>
<tr>
<td>Generative AI credits / month</td>
<td>25 (new subscribers) or 500 (long-term subscribers)</td>
<td>4,000 + unlimited standard</td>
</tr>
<tr>
<td>Speed after credits run out</td>
<td>Slow queue (30–60 sec per generation)</td>
<td>Never slows down (unlimited standard)</td>
</tr>
<tr>
<td>Best for</td>
<td> Casual users, students, hobbyists who only edit in Photoshop </td>
<td> Full-time creators, designers, video editors, heavy AI users </td>
</tr>
</tbody>
</table>
<h4>Quick verdict in plain English:</h4>
<p>Choose <strong>Photoshop Plan</strong> if:</p>
<ul>
<li>You open Photoshop 99% of the time</li>
<li>You use Generative Fill a few times a week at most</li>
<li>You want to save ~US$450 a year</li>
</ul>
<p>Choose <strong>Creative Cloud Pro</strong> if:</p>
<ul>
<li>You regularly use 3+ Adobe apps</li>
<li>You generate dozens (or hundreds) of AI images/videos every month</li>
<li>You hate waiting even 10 extra seconds for a generation</li>
<li>You need the extra 900 GB of cloud storage</li>
</ul>
<p>Most people who think they “need” Pro actually don’t. Try the Photoshop Plan first. You can upgrade anytime in under 2 minutes with zero downtime.</p>
<hr>
<h2 id="check-credits">Check Your AI Credits (30 Seconds)</h2>
<p>You don’t have to guess how many credits you have left; Adobe shows the exact number in two easy places.</p>
<h4>Method 1: Fastest (works on any device)</h4>
<ol>
<li>Open a browser and go to <a href="https://account.adobe.com" rel="nofollow noopener" target="_blank">https://account.adobe.com</a></li>
<li>Sign in with your Adobe ID</li>
<li>Click your profile picture (top-right corner)</li>
<li>Look for the box that says Generative credits – it shows something like “2,847 of 4,000 remaining” and a progress bar. It updates in real time and resets on your billing date.</li>
</ol>
<h4>Method 2: Inside Photoshop or Firefly</h4>
<ul>
<li>In Photoshop: Help menu → Generative Credits (takes you to the same page)</li>
<li>In Firefly web (<a href="https://firefly.adobe.com" rel="nofollow noopener" target="_blank">firefly.adobe.com</a>): click your avatar (top-right) → the counter is right there</li>
</ul>
<p>That’s literally it. No digging through menus, no support chat needed.</p>
<h5>Pro tip:</h5>
<p>Bookmark <a href="https://account.adobe.com/plans" rel="nofollow noopener" target="_blank">account.adobe.com/plans</a> and check it once a week if you’re a heavy AI user. Many people get a nasty surprise at the end of the month because they never looked.</p>
<hr>
<h2 id="credits-depleted">When You Run Out of Credits</h2>
<p>Adobe does not completely lock you out of Generative AI when your monthly credits hit zero, but things definitely change:</p>
<table>
<thead>
<tr>
<th>Situation</th>
<th>What Actually Happens</th>
</tr>
</thead>
<tbody>
<tr>
<td>You’re on Creative Cloud Pro</td>
<td>Almost nothing. You still have unlimited standard generations (basic Text-to-Image, Generative Fill, etc.). Only very premium features (long videos, 4K upscales, third-party models) stop until next month.</td>
</tr>
<tr>
<td>You’re on All Apps, Photoshop Plan, or Photography Plan</td>
<td>Features keep working, but they move to the slow priority queue. A generation that normally took 3–8 seconds now takes 20–90 seconds. Some advanced options (e.g., video generation in Premiere) become completely unavailable until credits reset.</td>
</tr>
<tr>
<td>You need it right now</td>
<td>You can instantly buy an add-on pack (100 credits for ~US$4.99, 2,000 credits for ~US$29.99, etc.) and the fast speed returns immediately.</td>
</tr>
</tbody>
</table>
<h4>In practice:</h4>
<ul>
<li>Casual users on the Photoshop Plan rarely notice the slowdown because they only do a handful of generations per month.</li>
<li>Heavy users feel it instantly: waiting a minute for every Generative Fill kills the workflow.</li>
</ul>
<p>The slowdown is Adobe’s way of nudging you toward either Creative Cloud Pro or the credit add-on packs.</p>
<hr>
<h2 id="buying-credits">Buying Extra Credits: Worth It?</h2>
<p>If you ever run out and don’t want the slow queue (or you’re on Pro but somehow burned through 4,000 credits doing video), Adobe sells add-on credit packs. These are monthly subscriptions (not one-time purchases) and stack on top of your plan’s included credits.</p>
<p>Current prices (in USD before local tax):</p>
<table>
<thead>
<tr>
<th>Pack Size</th>
<th>Monthly Price</th>
<th>Cost per 1,000 Generations</th>
<th>Best For</th>
</tr>
</thead>
<tbody>
<tr>
<td>100 credits</td>
<td>US$4.99</td>
<td>~US$50</td>
<td>Emergency top-up only</td>
</tr>
<tr>
<td>2,000 credits</td>
<td>US$29.99</td>
<td>~US$15</td>
<td>Heavy Photoshop/AI users on cheaper plans</td>
</tr>
<tr>
<td>7,000 credits</td>
<td>US$79.99</td>
<td>~US$11.40</td>
<td>Full-time creators who refuse to upgrade to Pro</td>
</tr>
<tr>
<td>50,000 credits</td>
<td>~US$299</td>
<td>~US$6</td>
<td>Agencies or absolute power users</td>
</tr>
</tbody>
</table>
<h4>Real-world math most people care about:</h4>
<ul>
<li>If you find yourself buying the 2,000-credit pack every month, you’re spending ~US$30 extra → total US$53/mo on a Photoshop Plan. At that point, just upgrade to Creative Cloud Pro (US$59.99) and get unlimited + all apps.</li>
<li>The 100-credit pack is almost never worth it; it’s the most expensive per generation by far.</li>
</ul>
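<p>The break-even math above, spelled out as a small Python sketch (prices from this article, before tax; the variable names are ours):</p>

```python
# Compare: Photoshop Single App plan plus a monthly 2,000-credit pack
# versus Creative Cloud Pro, using the article's US prices.
photoshop_plan = 22.99    # Photoshop Single App, paid annually
credit_pack_2000 = 29.99  # 2,000-credit monthly add-on
cc_pro = 59.99            # Creative Cloud Pro, paid annually

combo = photoshop_plan + credit_pack_2000
print(round(combo, 2))   # 52.98
print(combo < cc_pro)    # True
```

<p>So the combo is technically cheaper, but only by about US$7, and Pro also adds unlimited standard generations, 1 TB of storage, and 20+ apps.</p>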
<p>You can turn these add-ons on/off anytime from <a href="https://account.adobe.com" rel="nofollow noopener" target="_blank">account.adobe.com</a> → Plans &amp; Products → Manage plan → Add-ons.</p>
<hr>
<h2 id="downgrade-plan">Downgrading Without Losing Your Work</h2>
<p>Yes, you can drop from Creative Cloud Pro (or All Apps) down to the cheaper Photoshop Single App plan at any time, and you won’t lose files, presets, brushes, or Lightroom catalogs. Here’s the safe way to do it:</p>
<ol>
<li>Wait until the day after your current billing cycle ends (if you cancel mid-cycle on an annual plan, Adobe still charges a 50% early-termination fee on the remaining months, so wait)</li>
<li>Log in at <a href="https://account.adobe.com/plans" rel="nofollow noopener" target="_blank">https://account.adobe.com/plans</a></li>
<li>Find your active plan → click Manage plan → Change plan</li>
<li>Choose Photoshop (the single-app option, US$22.99/mo)</li>
<li>Pick the same billing cycle you’re on now (annual or monthly)</li>
<li>Confirm. The change takes effect at the end of your current paid period</li>
</ol>
<h4>What stays the same:</h4>
<ul>
<li>All your cloud files (they just cap at 100 GB instead of 1 TB; anything over 100 GB stays accessible but you can’t add more until you’re under the limit)</li>
<li>All installed apps continue working until the downgrade date</li>
<li>Lightroom Classic catalogs, Photoshop brushes, presets, fonts: everything stays on your computer</li>
</ul>
<h4>What you lose immediately on downgrade day:</h4>
<ul>
<li>Access to Illustrator, Premiere, After Effects, InDesign, etc.</li>
<li>The extra 900 GB of cloud storage</li>
<li>The 4,000 (or 1,000) AI credits → you drop to the Photoshop Plan’s 25–500 credits</li>
</ul>
<h5>Pro tip:</h5>
<p>Before you downgrade, download anything important from Creative Cloud storage to your hard drive or another cloud service. Once you’re under 100 GB, you’re golden forever.</p>
<hr>
<h2 id="decision-guide">30-Second Decision Guide</h2>
<p>Find your situation below to instantly see which plan matches your actual Photoshop usage:</p>
<table>
<thead>
<tr>
<th>Your Situation</th>
<th>Best Plan for You</th>
<th>Why</th>
</tr>
</thead>
<tbody>
<tr>
<td>I only open Photoshop (maybe Lightroom sometimes)</td>
<td>Photoshop Single App Plan (~US$22.99/mo)</td>
<td>Half the price, full Photoshop, enough AI for casual use</td>
</tr>
<tr>
<td>I shoot/edit photos a lot + want 1 TB storage</td>
<td>Photography Plan 1 TB (~US$19.99/mo)</td>
<td>Cheapest way to get full Lightroom Classic + decent AI credits</td>
</tr>
<tr>
<td>I use 3+ Adobe apps regularly (Illustrator, Premiere, etc.)</td>
<td>Creative Cloud Pro (~US$59.99/mo)</td>
<td>You’re already paying for it, so keep it</td>
</tr>
<tr>
<td>I generate dozens of AI images or videos every week</td>
<td>Creative Cloud Pro</td>
<td>Only plan that never slows down and gives truly unlimited standard generations</td>
</tr>
<tr>
<td>I’m on Pro/All Apps but barely touch the other apps</td>
<td>Downgrade to Photoshop Plan</td>
<td>Save US$400+ per year. Do it the day after your billing cycle ends</td>
</tr>
<tr>
<td>Not sure / just starting out</td>
<td>Start with Photoshop Single App Plan</td>
<td>Cheapest to test. You can upgrade in 2 minutes later if you need more</td>
</tr>
</tbody>
</table>
<h4>The bottom line for Photoshop users:</h4>
<p>Most Photoshop users who subscribe to Creative Cloud Pro are overpaying for features they rarely or never use. Unless you’re heavily using AI features daily or need multiple Adobe apps, the Photoshop Single App plan will give you the exact same Photoshop experience at less than half the price.</p>
<p>If you’re a photographer who uses both Photoshop and Lightroom, the Photography Plan is an even better deal.</p>
<p>Take a moment to review your actual usage, check your AI credit balance, and make sure your subscription matches how you actually use Photoshop. Your creative budget will thank you.</p><p>The post <a href="https://www.hongkiat.com/blog/adobe-creative-cloud-plans-photoshop/">Which Creative Cloud Plan Fits Your Photoshop Use?</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74225</post-id>	</item>
		<item>
		<title>How to Use OpenClaw with DeepSeek</title>
		<link>https://www.hongkiat.com/blog/configure-deepseek-openclaw/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 13:00:03 +0000</pubDate>
				<category><![CDATA[Internet]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74235</guid>

					<description><![CDATA[<p>OpenClaw is a powerful open-source tool that transforms AI into an autonomous agent. Unlike basic chatbots that are confined to a browser tab or an app, OpenClaw allows AI to execute tasks directly on your machine, such as managing files, running terminal commands, reading calendars, and even sending messages to WhatsApp, Telegram, and Discord. While&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/configure-deepseek-openclaw/">How to Use OpenClaw with DeepSeek</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong><a rel="nofollow noopener" target="_blank" href="https://openclaw.ai">OpenClaw</a></strong> is a powerful open-source tool that transforms AI into an autonomous agent. Unlike basic chatbots that are confined to a browser tab or an app, OpenClaw allows AI to execute tasks directly on your machine, such as managing files, running terminal commands, reading calendars, and even sending messages to WhatsApp, Telegram, and Discord.</p>
<p>While OpenClaw offers seamless, out-of-the-box support for industry giants like <a rel="nofollow noopener" target="_blank" href="https://openai.com">OpenAI</a>, <a rel="nofollow noopener" target="_blank" href="https://anthropic.com">Anthropic</a>, and <a rel="nofollow noopener" target="_blank" href="https://gemini.google.com">Google Gemini</a>, unfortunately, it currently lacks native integration for <a rel="nofollow noopener" target="_blank" href="https://deepseek.com">DeepSeek</a>.</p>
<p>This can be a significant hurdle because DeepSeek provides a model, DeepSeek v3, that rivals <a rel="nofollow noopener" target="_blank" href="https://openai.com/index/gpt-5/">GPT-5</a>, but at a staggering 95% lower cost.</p>
<p>In this guide, we’ll walk you through the steps, applying a few manual configurations so we can have the best of both worlds: OpenClaw’s automation capabilities with DeepSeek’s cost-efficient intelligence.</p>
<p>Let’s get started.</p>
<hr>
<h2>Prerequisites</h2>
<p>Before we move further, make sure that you have the following ready.</p>
<ul>
<li>Node.js (v22 or higher) and NPM</li>
<li><a rel="nofollow noopener" target="_blank" href="https://api-docs.deepseek.com">DeepSeek API key</a></li>
</ul>
<p>Once you have them ready, you can proceed to install and set up OpenClaw.</p>
<hr>
<h2>Step 1: Install OpenClaw</h2>
<p>First, open your terminal and run the following command to install OpenClaw globally:</p>
<pre>
npm install -g openclaw
</pre>
<p><em>Note: This installation step might take a while depending on your internet connection speed.</em></p>
<hr>
<h2>Step 2: Run Onboarding</h2>
<p>Next, we need to run the onboarding process.</p>
<pre>
openclaw onboard --install-daemon
</pre>
<p>This will install the OpenClaw daemon and guide you through selecting models, providers, and communication channels.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/configure-deepseek-openclaw/openclaw-onboard.jpg" alt="OpenClaw onboarding configuration setup screen" width="1000" height="600">
    </figure>
<p>In this example, we are going to focus on integrating DeepSeek. When prompted, use the left and right arrow keys to toggle between options, and hit <kbd>Return</kbd> to confirm your selection. Here’s how to answer each prompt:</p>
<ul>
<li>I understand this is powerful and inherently risky. Continue? <strong><em>Yes</em></strong></li>
<li>Onboarding mode. <strong><em>Quickstart</em></strong></li>
<li>Model/auth provider. <strong><em>Skip for now</em></strong></li>
<li>Filter models by provider. <strong><em>Minimax</em></strong> (Simply because it has 2 models, which makes it easier to select and clean up later)</li>
<li>Default model. <strong><em>minimax/MiniMax-M2</em></strong></li>
<li>Select channel. <strong><em>Skip for now</em></strong></li>
<li>Configure skills now? <strong><em>No</em></strong> (We can always reconfigure it later)</li>
<li>Enable hooks? <strong><em>Skip for now</em></strong> (hit <kbd>Space</kbd> to select)</li>
<li>How do you want to hatch your bot? <strong><em>Do this later</em></strong> (Right now, it won’t work since we haven’t configured DeepSeek yet)</li>
</ul>
<p>Once you see <strong><q>Onboarding complete. Use the dashboard link above to control OpenClaw.</q></strong>, you can hit <kbd>Ctrl</kbd> + <kbd>C</kbd> to exit the onboarding process, and proceed to our next step.</p>
<hr>
<h2>Step 3: Configure DeepSeek Models</h2>
<p>We’ll need to modify the configuration file located at <code>~/.openclaw/openclaw.json</code>. We’ll begin by defining the DeepSeek models in the <code>models</code> section.</p>
<p>Now, let’s add the DeepSeek models configuration to the file:</p>
<pre>
{
    "models": {
        "mode": "merge",
        "providers": {
            "deepseek": {
                "baseUrl": "https://api.deepseek.com/v1",
                "apiKey": "${DEEPSEEK_API_KEY}",
                "api": "openai-completions",
                "models": [
                {
                    "id": "deepseek-chat",
                    "name": "DeepSeek Chat (v3.2)",
                    "reasoning": false,
                    "input": [
                    "text"
                    ],
                    "cost": {
                        "input": 2.8e-7,
                        "output": 4.2e-7,
                        "cacheRead": 2.8e-8,
                        "cacheWrite": 2.8e-7
                    },
                    "contextWindow": 128000,
                    "maxTokens": 8192
                },
                {
                    "id": "deepseek-reasoner",
                    "name": "DeepSeek Reasoner (v3.2)",
                    "reasoning": true,
                    "input": [
                    "text"
                    ],
                    "cost": {
                        "input": 2.8e-7,
                        "output": 4.2e-7,
                        "cacheRead": 2.8e-8,
                        "cacheWrite": 2.8e-7
                    },
                    "contextWindow": 128000,
                    "maxTokens": 65536
                }
                ]
            }
        }
    }
}
</pre>
<p>Notice that the <code>apiKey</code> is using an environment variable <code>${DEEPSEEK_API_KEY}</code>. This is to ensure that the API key is not exposed in the configuration file. <strong>Later we’ll need to set this environment variable before running OpenClaw</strong>.</p>
<hr>
<h2>Step 4: Configure Agents</h2>
<p>Next, we need to configure the <code>agents</code> section, which sets the list of models and the default model to be used by the agent.</p>
<p>In this case, we’ll modify the <code>agents</code> section to use DeepSeek models that we’ve just defined in the previous step, as follows:</p>
<pre>
{
    "agents": {
        "defaults": {
            "model": {
                "primary": "deepseek/deepseek-chat"
            },
            "models": {
                "deepseek/deepseek-chat": {},
                "deepseek/deepseek-reasoner": {}
            },
            "workspace": "~/.openclaw/workspace",
            "compaction": {
                "mode": "safeguard"
            },
            "maxConcurrent": 4,
            "subagents": {
            "maxConcurrent": 8
            }
        }
    }
}
</pre>
<hr>
<h2>Step 5: Set Environment Variable</h2>
<p>Now, we need to set the environment variable <code>DEEPSEEK_API_KEY</code> to the value of our DeepSeek API key.</p>
<h4>Linux and macOS</h4>
<p>On Linux and macOS, you can run the following command in your terminal:</p>
<pre>
export DEEPSEEK_API_KEY="your_api_key_here"
</pre>
<p>To persist the API key across sessions, you can also add the above line to your <code>~/.bashrc</code> or <code>~/.zshrc</code> file (or, without the <code>export</code> keyword, to <code>/etc/environment</code>).</p>
<p>Don’t forget to replace <code>your_api_key_here</code> with your actual DeepSeek API key. And if you’ve just added it to one of these files, <strong>you’ll need to restart your terminal (or source the file) for the changes to take effect</strong>.</p>
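<p>Before launching OpenClaw, a quick sanity check (on Linux and macOS) confirms that the variable is actually visible to new processes:</p>
<pre>
# Prints the key if it is set; exits with an error message if it is not.
echo "${DEEPSEEK_API_KEY:?DEEPSEEK_API_KEY is not set}"
</pre>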
<h4>Windows</h4>
<p>On Windows, you can set the environment variable for the current session in Command Prompt:</p>
<pre>
set DEEPSEEK_API_KEY=your_api_key_here
</pre>
<p>Or, to set it persistently across sessions, use <code>setx</code>:</p>
<pre>
setx DEEPSEEK_API_KEY "your_api_key_here"
</pre>
<p><em>Note: <code>setx</code> does not affect the current session; open a new terminal window for the variable to become available.</em></p>
<hr>
<h2>Step 6: Ensure DeepSeek Models are on the List</h2>
<p>Now that we’ve configured the models and set the environment variable, we can verify that the DeepSeek models are available to OpenClaw by running the following command:</p>
<pre>
openclaw models list
</pre>
<p>You should see the following output:</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/configure-deepseek-openclaw/openclaw-models-list.jpg" alt="OpenClaw models list command output" width="1000" height="600">
    </figure>
<hr>
<h2>Step 7: Hatch Your First Agent</h2>
<p>Everything is now configured, so we can hatch our first agent.</p>
<p>In this case, I’m going with TUI as the interface. So we run:</p>
<pre>
openclaw tui
</pre>
<p>This will start the TUI interface where you can chat with the agent. And we can see the model that’s currently being used is <code>deepseek/deepseek-chat</code>.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/configure-deepseek-openclaw/openclaw-tui.jpg" alt="OpenClaw TUI agent chat interface" width="1000" height="800">
    </figure>
<p>
        <strong>Note:</strong> If you have a previously active session or a prior OpenClaw installation, you need to restart the gateway <strong>before</strong> starting a new TUI session. Run the following command:</p>
<pre>
openclaw gateway restart</pre>
<hr>
<h2>Conclusion</h2>
<p>And there you have it! You’ve successfully bridged OpenClaw with DeepSeek.</p>
<p>Now it’s time to put your new agent to work, whether that’s managing complex workflows, handling files, or automating tasks and messaging.</p>
<p><strong>You’re all set to do more for less.</strong></p><p>The post <a href="https://www.hongkiat.com/blog/configure-deepseek-openclaw/">How to Use OpenClaw with DeepSeek</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74235</post-id>	</item>
		<item>
		<title>How to Use the WordPress Abilities API (Register &amp; Execute)</title>
		<link>https://www.hongkiat.com/blog/wordpress-abilities-api-tutorial/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 13:00:54 +0000</pubDate>
				<category><![CDATA[WordPress]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74223</guid>

					<description><![CDATA[<p>WordPress 6.9 ships with a number of interesting features. Among these is a new API called Abilities API. The Abilities API provides a standardized way for WordPress core, plugins, and themes to define their capabilities in a format both humans and machines can read. In this post, we’ll explore what the Abilities API is,&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/wordpress-abilities-api-tutorial/">How to Use the WordPress Abilities API (Register &#038; Execute)</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>WordPress 6.9 ships with a number of interesting features. Among these is a new API called <strong><a rel="nofollow noopener" target="_blank" href="https://developer.wordpress.org/news/2025/11/introducing-the-wordpress-abilities-api/">Abilities API</a></strong>.</p>
<p><strong>The Abilities API</strong> provides a standardized way for WordPress core, plugins, and themes to define their capabilities in a format both humans and machines can read.</p>
<p>In this post, we’ll explore what the Abilities API is, why it matters, and how to use it in practice with some code examples.</p>
<p>Without further ado, let’s get started.</p>
<h2>What Is the Abilities API?</h2>
<p>The Abilities API is a new feature in WordPress that acts like a dictionary of everything a site can do. Before this, there was no simple or consistent way for plugins or external tools to discover a site’s available features. Functionality was often scattered across hooks, REST API endpoints, and various pieces of custom code.</p>
<p>With the Abilities API, automation tools, AI assistants, and other plugins can more easily understand how to interact with a WordPress site. Tools like AI agents, Zapier, or <a rel="nofollow noopener" target="_blank" href="https://n8n.io/">n8n</a> can simply ask WordPress, <strong>“What can you do?”</strong> and receive a structured list of abilities.</p>
<p>This API also makes cross-plugin collaboration much cleaner. Plugins can call each other’s abilities directly instead of relying on hidden hooks or fragile workarounds.</p>
<h2>Getting Started</h2>
<p>To <strong>use the Abilities API</strong>, the first step is to register a new ability. This is typically done within your plugin or theme. An “Ability” should contain:</p>
<ul>
<li>A unique name consisting of only lowercase alphanumeric characters, dashes, and forward slashes, e.g. <code>hongkiatcom/create-invoice</code>.</li>
<li>A human-readable description and label.</li>
<li>Defined input and output schemas.</li>
<li>A permission check.</li>
</ul>
<p>Here is a simple example of how we register an ability that analyzes a site and returns some metrics.</p>
<p>In this example, we call it <code>hongkiatcom/site-analytics-summary</code>. It doesn’t require any inputs and returns an object containing three metrics: visits, signups, and sales. The ability can only be executed by users with the <code>manage_options</code> capability.</p>
<pre>
add_action( 'wp_abilities_api_init', function () {
    if ( ! function_exists( 'wp_register_ability' ) ) {
        return;
    }
    
    wp_register_ability(
        'hongkiatcom/site-analytics-summary',
        [
            'label'       => __( 'Get Site Analytics Summary', 'myplugin' ),
            'description' => __( 'Returns a simple overview of site performance.', 'myplugin' ),
            'input_schema' => [
                'type'       => 'object',
                'properties' => [],
            ],
            'output_schema' => [
                'type'       => 'object',
                'properties' => [
                    'visits'  => [ 'type' => 'integer' ],
                    'signups' => [ 'type' => 'integer' ],
                    'sales'   => [ 'type' => 'integer' ],
                ],
            ],
            'permission_callback' => function () {
                return current_user_can( 'manage_options' );
            },
            'execute_callback' => function () {
                return [
                    'visits'  => 1473,
                    'signups' => 32,
                    'sales'   => 5,
                ];
            },
        ]
    );
});
</pre>
<p>Here is another example of registering an ability that processes an order. This ability takes in customer ID, product SKUs, and a payment token as input, and returns the order ID, status, and a confirmation message as output.</p>
<pre>
add_action( 'init', function() {
    if ( ! function_exists( 'wp_register_ability' ) ) {
        return;
    }

    wp_register_ability( 'hongkiatcom/process-order', [
        'description' => 'Handles payment, creates an order record, and sends a confirmation email.',
        'execute_callback'    => function () {
            // Implementation of order processing logic goes here...
        },
        'input_schema' => [
            'type'       => 'object',
            'properties' => [
                'customer_id' => [
                    'type'        => 'integer',
                    'description' => 'The ID of the customer placing the order.',
                ],
                'product_skus' => [
                    'type'        => 'array',
                    'description' => 'An array of product SKUs (strings) to be included in the order.',
                    'items'       => [ 'type' => 'string' ],
                ],
                'payment_token' => [
                    'type'        => 'string',
                    'description' => 'A secure, single-use token from the payment gateway.',
                ],
            ],
            'required' => [ 'customer_id', 'product_skus', 'payment_token' ],
        ],
        'output_schema' => [
            'type'       => 'object',
            'properties' => [
                'order_id' => [
                    'type'        => 'integer',
                    'description' => 'The ID of the newly created order post.',
                ],
                'order_status' => [
                    'type'        => 'string',
                    'description' => 'The resulting status of the order (e.g., "processing", "pending").',
                ],
                'message' => [
                    'type'        => 'string',
                    'description' => 'A confirmation message.',
                ],
            ],
        ],
    ] );
} );
</pre>
<h2>Executing Abilities</h2>
<p>Registering an ability doesn’t do much on its own. We also want to execute it and get the result.</p>
<h5>In PHP</h5>
<p>In PHP, using an ability is straightforward. WordPress provides a function <code>wp_get_ability()</code> to retrieve the ability object by name, and then you call the ability’s <code>execute()</code> method.</p>
<p>Here’s how you might execute the <code>site-analytics-summary</code> ability we registered:</p>
<pre>
$ability = wp_get_ability( 'hongkiatcom/site-analytics-summary' );

if ( $ability ) {
    $result = $ability->execute();
}
</pre>
<p>If the ability requires inputs such as in the <code>process-order</code> example, you would pass an associative array of inputs to the <code>execute()</code> method:</p>
<pre>
$ability = wp_get_ability( 'hongkiatcom/process-order' );
if ( $ability ) {
    $inputs = [
        'customer_id'  => 123,
        'product_skus' => [ 'SKU123', 'SKU456' ],
        'payment_token'=> 'tok_1A2B3C****',
    ];
    $result = $ability->execute( $inputs );
}
</pre>
<p>Notice that we didn’t have to manually handle permission checks or input validation here. If the current user didn’t have permission, or if we passed the wrong type of input, <code>execute()</code> would fail gracefully: it wouldn’t run the callback and would return an error or false. This is thanks to the schemas and permission callback we set up during registration, which make using the ability safe and predictable.</p>
<p>Now that we’ve seen how to declare and use abilities on the back-end, let’s look at how abilities can be accessed from JavaScript, which covers front-end use cases and external integrations.</p>
<h5>In JavaScript</h5>
<p>The Abilities API supports the REST API out of the box, but you need to make sure to enable <code>show_in_rest</code> in the ability registration, for example:</p>
<pre>
add_action( 'wp_abilities_api_init', function () {
    if ( ! function_exists( 'wp_register_ability' ) ) {
        return;
    }
    
    wp_register_ability(
        'hongkiatcom/site-analytics-summary',
        [
            ...
            'execute_callback' => function () {
                return [
                    'visits'  => 1473,
                    'signups' => 32,
                    'sales'   => 5,
                ];
            },
            'meta' => [
                'show_in_rest' => true,
            ],
        ]
    );
});
</pre>
<p>Once that’s set, the ability becomes accessible under a special REST namespace <code>wp-abilities/v1/{ability-namespace}/{ability-name}</code>. In our example above, the full REST endpoint to execute the ability would be: <code>/wp-json/wp-abilities/v1/hongkiatcom/site-analytics-summary</code>.</p>
<p>This means external applications or your own front-end code can execute abilities by sending HTTP requests without you writing any extra REST handler code. WordPress takes care of that based on the info you provided.</p>
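<p>As a sketch, the endpoint pattern above can be filled in like this. The site URL is a placeholder, and executing the ability requires an authenticated request (for instance, a WordPress application password), so the actual call is shown commented out:</p>
<pre>
# Hypothetical values: replace with your own site URL and ability identifier.
SITE="https://example.com"
ABILITY="hongkiatcom/site-analytics-summary"
ENDPOINT="$SITE/wp-json/wp-abilities/v1/$ABILITY"
echo "$ENDPOINT"

# Execute the ability (check the Abilities API docs for the expected HTTP verb):
# curl -u "user:application-password" "$ENDPOINT"
</pre>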
<p>WordPress is also introducing a JavaScript client library for the Abilities API, making it much easier to call abilities from the browser or any headless JS setup. Instead of writing manual <code>fetch()</code> calls, you can simply use built-in helper functions to list, fetch, and run abilities.</p>
<p>First, we’d need to install the library with NPM:</p>
<pre>
npm i @wordpress/abilities
</pre>
<p>Then, we can use it like this in our JavaScript application (the snippet assumes a React component, with <code>response</code> as a piece of component state):</p>
<pre>
// Assumes a React component; executeAbility comes from @wordpress/abilities.
import { useEffect, useState } from 'react';
import { executeAbility } from '@wordpress/abilities';

const [ response, setResponse ] = useState( null );

useEffect( () => {
    executeAbility( 'hongkiatcom/site-analytics-summary' ).then( ( result ) => {
        setResponse( result ); // Store the result in state.
    } );
}, [] );
</pre>
<h2>Wrapping Up</h2>
<p>The Abilities API introduces a new way for WordPress to describe what it can do in a clear and more standardized format. This improves the interoperability of your site and allows it to work more naturally with AI assistants or automation tools.</p>
<p>In this article, we’ve just scratched the surface of what’s possible with the Abilities API. As more plugins and themes adopt it, we can expect a richer ecosystem where capabilities are easily discoverable and usable across different contexts.</p>
<p>And in the next article, we’ll see how to integrate your WordPress site with external applications like Claude or LM Studio using the Abilities API.</p>
<p>So stay tuned!</p><p>The post <a href="https://www.hongkiat.com/blog/wordpress-abilities-api-tutorial/">How to Use the WordPress Abilities API (Register &#038; Execute)</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74223</post-id>	</item>
		<item>
		<title>A Look into Google Antigravity</title>
		<link>https://www.hongkiat.com/blog/antigravity-google-ai-developer-tool/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 13:00:51 +0000</pubDate>
				<category><![CDATA[Internet]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74221</guid>

					<description><![CDATA[<p>Writing code has come a long way. We started by writing everything from scratch by hand, then moved to frameworks, libraries and boilerplates, and now we’re working with AI tools. Google Antigravity is part of this shift. Despite the name, Antigravity won’t actually make your code float. But it might change how you write code.&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/antigravity-google-ai-developer-tool/">A Look into Google Antigravity</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Writing code has come a long way. We started by writing everything from scratch by hand, then moved to frameworks, libraries and boilerplates, and now we’re working with AI tools. <strong><a href="https://antigravity.google" target="_blank" rel="noopener noreferrer">Google Antigravity</a></strong> is part of this shift.</p>
<p>Despite the name, Antigravity won’t actually make your code float. But <strong>it might change how you write code</strong>. Rather than typing every line yourself, you can simply explain what you want to build, and an AI agent will plan the steps, write the code, and improve it for you, so you can focus on the idea instead of the implementation.</p>
<p>In this post, we’ll explore a bit more what Google Antigravity is, how it works, and why developers are starting to pay attention to it.</p>
<h2 id="installation">Installation</h2>
<p>Let’s first get started with the installation.</p>
<p>Google Antigravity is available on all major platforms – macOS, Windows, and Linux. You can download the installer from the <a href="https://antigravity.google/download" target="_blank" rel="noopener noreferrer">official Google Antigravity website</a>.</p>
<p>If you’re on macOS, you can also install it via <a href="https://formulae.brew.sh/cask/antigravity" target="_blank" rel="noopener noreferrer">Homebrew Cask</a>.</p>
<p>After you’ve downloaded the installer, run it and follow the on-screen instructions. Once installed, launch Google Antigravity from your applications menu or desktop shortcut.</p>
<p>Upon the initial launch, you will be prompted with several questions, such as whether you’d like to import configuration from other compatible editors like VS Code and Cursor, the editor theme, and the workflow setup you’d like to run:</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-view.jpg" alt="Google Antigravity setup workflow options screen" width="1000" height="600">
    </figure>
<p>There are 4 options, as we can see above.</p>
<p>Although the recommended setup is <strong>Agent-assisted development</strong>, I prefer to be very hands-on with my code, so I selected <strong>Review-driven development</strong>. This mode lets you always review and approve the code changes suggested by the AI agent before applying any changes.</p>
<p>This setting also turns off the <strong>Terminal execution policy</strong>. I think this is a safer option, since I don’t want the AI agent to run any terminal commands without my approval either.</p>
<p>Again, you’re free to keep the default or fully customize the setup in the settings.</p>
<p>On the last step, you will be asked to log in to your Google Account.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-login.jpg" alt="Google Antigravity Google account login screen" width="1000" height="600">
    </figure>
<h2 id="first-impression">First Impression</h2>
<p>Google Antigravity is built on top of <strong>Visual Studio Code</strong>. So even though there are several tweaks that Google made, if you’ve been using Visual Studio Code, you will immediately feel at home. The interface is clean and intuitive, with a sidebar for file navigation, a main editor area, and a terminal at the bottom.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-mode.jpg" alt="Google Antigravity editor interface and sidebar" width="1000" height="600">
    </figure>
<p>On the right side, you will find the prompt input where you can interact with the AI agent. And there are two modes you can choose to interact with the AI agent: <strong>Planning mode</strong> and <strong>Fast mode</strong>.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-conversation-mode.jpg" alt="Google Antigravity prompt and conversation mode" width="1000" height="600">
    </figure>
<h4>Planning Mode</h4>
<p>I think <strong>Planning Mode</strong> is best when you’re working on something big or complex. So instead of jumping straight into code, the AI first pauses and creates a clear plan.</p>
<p>For example, if you ask it to build a WordPress plugin, it will outline steps like creating the directories and files, and list which WordPress functions and hooks it’s going to use in the plugin.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-implementation-plan.jpg" alt="Google Antigravity implementation plan outline" width="1000" height="600">
    </figure>
<p>This gives you a chance to review the approach, make changes, and stay in control before any code is written.</p>
<p>Once you think it’s good, you can hit the <strong>Proceed</strong> button to continue. Or, submit a review for any changes you’d like to see in the plan.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-implementation-plan-actions.jpg" alt="Google Antigravity plan proceed and actions" width="1000" height="300">
    </figure>
<p>Aside from the implementation plan we saw here, Antigravity may also generate task lists, screenshots, and browser recordings, depending on what you’re trying to build.</p>
<h4>Fast Mode</h4>
<p>In <strong>Fast Mode</strong>, by contrast, the AI agent will <strong>proceed immediately</strong>: it creates the plugin directory and files, such as the PHP and readme.txt files, and adds the functions and hooks without outlining a plan first.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-implementation-fast.jpg" alt="Google Antigravity fast mode implementation" width="1000" height="600">
    </figure>
<p>You can still review the code and accept it if you like the output, or request changes.</p>
<h2 id="agent-manager">Agent Manager</h2>
<p>The Agent Manager is a unique feature of Google Antigravity that changes how you work as a developer. Instead of being hands-on and writing everything line by line, you become more like a project architect.</p>
<p>Through what Google calls the <strong>“Mission Control”</strong> dashboard, you will define high-level goals and delegate tasks to AI agents. These agents handle the detailed planning and execution for you, so you can focus more on direction and decisions rather than implementation.</p>
<p>To open it up, find <strong>Open Agent Manager</strong> at the top right of the window, as you can see below.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-button.jpg" alt="Google Antigravity Agent Manager button location" width="1000" height="320">
    </figure>
<p>The Agent Manager looks similar to common AI chat interfaces, where you can see the prompt input right at the center of the screen.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager.jpg" alt="Google Antigravity Agent Manager chat interface" width="1000" height="600">
    </figure>
<p>On the left side, you will find several sections:</p>
<p><strong>Inbox</strong>: This is where you can see all your ongoing and past conversations with the Agent.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-inbox.jpg" alt="Google Antigravity Agent Manager inbox" width="1000" height="320">
    </figure>
<p><strong>Workspaces</strong>: This is where you can manage different projects and apply outputs from the Agent to specific projects.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-workspace.jpg" alt="Google Antigravity Agent Manager workspaces" width="1000" height="300">
    </figure>
<p><strong>Playground</strong>: This is where you can experiment with different prompts and see how the Agent responds without affecting your main projects. If you like the output, you can easily transfer it to one of your workspaces.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-playground.jpg" alt="Google Antigravity Agent Manager playground" width="1000" height="300">
    </figure>
<h4>Starting New Conversation</h4>
<p>Let’s try starting a new conversation with the Agent. Click on the <strong>New Conversation</strong> button at the top left of the Agent Manager. Then, you can choose whether you’d like to start your new conversation in a specific Workspace, or in a Playground.</p>
<p>I’d choose to start <strong>in the Playground</strong> so I can experiment freely without affecting any projects.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-new-conversation-place.jpg" alt="Google Antigravity new conversation workspace choice" width="1000" height="450">
    </figure>
<p>In this case, I asked it to create a new script to retrieve the path of a plugin in WordPress.</p>
<p>Within the Playground, Google Antigravity will create an isolated workspace directory outside the main ones. You can continue to iterate on the outputs in the Playground.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-playground-example.jpg" alt="Google Antigravity playground script example" width="1000" height="600">
    </figure>
<p>If you feel the script is ready and you’d like to move to <strong>a Workspace</strong>, you can click on the directory at the top right of the window and select the directory where your workspace resides.</p>
<figure>
        <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/antigravity-google-ai-developer-tool/google-antigravity-agent-manager-playground-move.jpg" alt="Google Antigravity move files to workspace" width="1000" height="600">
    </figure>
<p>This will move all the files created in the Playground to the selected Workspace.</p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>Google Antigravity is an exciting new tool that might change how we write code. By allowing us to focus on high-level ideas and delegating the implementation details to AI agents, it has the potential to significantly speed up development workflows.</p>
<p>It’s currently in public preview and is free for individual users with very generous limits, so I encourage you to try it out and see how it fits into your development process.</p>
<p>In the next article, we’ll explore how to further customize <strong>Google Antigravity</strong> and how to secure your projects when working with AI agents.</p>
<p>Stay tuned!</p><p>The post <a href="https://www.hongkiat.com/blog/antigravity-google-ai-developer-tool/">A Look into Google Antigravity</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74221</post-id>	</item>
		<item>
		<title>5 Tailwind CSS Essential Tools to Boost Your Productivity</title>
		<link>https://www.hongkiat.com/blog/tailwind-css-productivity-tools/</link>
		
		<dc:creator><![CDATA[Thoriq Firdaus]]></dc:creator>
		<pubDate>Fri, 06 Feb 2026 13:00:42 +0000</pubDate>
				<category><![CDATA[Coding]]></category>
		<guid isPermaLink="false">https://www.hongkiat.com/blog/?p=74219</guid>

					<description><![CDATA[<p>Tailwind CSS has changed how we build websites by using utility classes like text-center or bg-blue-500 directly in HTML. But as projects get bigger, the huge number of utilities can become overwhelming, leading to long class lists, slower development, and a constant need to look up class names. So how do we keep the speed&#8230;</p>
<p>The post <a href="https://www.hongkiat.com/blog/tailwind-css-productivity-tools/">5 Tailwind CSS Essential Tools to Boost Your Productivity</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong><a rel="nofollow noopener" target="_blank" href="https://tailwindcss.com">Tailwind CSS</a></strong> has changed how we build websites by using utility classes like <code>text-center</code> or <code>bg-blue-500</code> directly in HTML.</p>
<p>But as projects get bigger, the huge number of utilities can become overwhelming, leading to long class lists, slower development, and a constant need to look up class names. So how do we keep the speed and flexibility without getting buried under the class chaos?</p>
<p>In this article, we’ll explore five essential tools that can help you stay productive while working with Tailwind CSS. These tools will help you manage your classes better, speed up your workflow, and keep your code clean and maintainable.</p>
<p>Let’s check them out!</p>
<h2>1. VSCode Extensions</h2>
<p>If you’re using a code editor like <a rel="nofollow noopener" target="_blank" href="https://code.visualstudio.com">VSCode</a>, you should install the <a rel="nofollow noopener" target="_blank" href="https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss">Tailwind CSS IntelliSense</a> extension. It provides intelligent suggestions as you type, helping you quickly find the right utility classes without having to remember them all.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/tailwind-css-productivity-tools/tailwindcss-intellisense.jpg" alt="Tailwind CSS Intellisense VSCode extension autocomplete" width="1000" height="600">
  </figure>
<p>On top of that, the extension also offers features like linting for your Tailwind CSS code. This can significantly speed up your development process and reduce errors.</p>
<p>Another extension I’d suggest is <strong><a rel="nofollow noopener" target="_blank" href="https://marketplace.visualstudio.com/items?itemName=stivo.tailwind-fold">Tailwind Fold</a></strong>. It helps you manage long class lists by letting you collapse and expand them, keeping your code clean and making it easier to navigate through your HTML files.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/tailwind-css-productivity-tools/tailwindcss-fold.jpg" alt="Tailwind Fold VSCode extension collapse class lists" width="1000" height="600">
  </figure>
<h2>2. Code Linting Tools</h2>
<p>Linting and proper formatting are essential parts of maintaining code quality, and there’s one tool that I would recommend if you’re working with Tailwind CSS: the <a rel="nofollow noopener" target="_blank" href="https://github.com/tailwindlabs/prettier-plugin-tailwindcss">prettier-plugin-tailwindcss</a>.</p>
<p><a rel="nofollow noopener" target="_blank" href="https://prettier.io/docs/plugins">This Prettier Plugin</a> is an official plugin from the Tailwind CSS team that automatically sorts your classes in a consistent order whenever you format your code.</p>
<p>On top of that, it also removes duplicate classes and unnecessary whitespace, ensuring your Tailwind CSS code stays clean.</p>
<p>To install it, run:</p>
<pre>
npm install -D prettier prettier-plugin-tailwindcss
</pre>
<p>Then, update the Prettier config in your project to include <code>prettier-plugin-tailwindcss</code>.</p>
<pre>
{
  "plugins": ["prettier-plugin-tailwindcss"]
}
</pre>
<p>After you’ve configured it, I’d recommend installing the <a rel="nofollow noopener" target="_blank" href="https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode">Prettier – Code formatter</a> extension if you’re using VSCode. This extension lets you format your code directly from VSCode using Prettier and reports any formatting issues.</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/tailwind-css-productivity-tools/tailwindcss-prettier-vscode.jpg" alt="Prettier Tailwind CSS VSCode format extension" width="1000" height="600">
  </figure>
<h2>3. Tailwind Merge</h2>
<p>When creating reusable components with Tailwind CSS, such as a Button that accepts custom classes, you’ll likely run into class conflicts. For example, what happens if a component has a default <code>p-2</code> but someone passes <code>p-5</code> through <code>className</code>?</p>
<p>Since Tailwind utilities are atomic, both classes end up in the final output. This is exactly the messy problem that the <strong><a rel="nofollow noopener" target="_blank" href="https://www.npmjs.com/package/tailwind-merge">tailwind-merge</a></strong> package solves beautifully.</p>
<p>It is a smart little utility that cleans up conflicting classes automatically. It looks at all the styles you pass in, groups anything that conflicts, like that <code>p-2</code> vs <code>p-5</code>, and returns only the winning one, based on a simple rule: <strong>the last class wins</strong>. For example:</p>
<pre>
import { twMerge } from 'tailwind-merge';

const classes = twMerge('p-2', 'p-5');
console.log(classes); // Output: 'p-5'
</pre>
<p>This makes component libraries in React, Vue, or Svelte much easier to manage, allowing you and the users to freely override styles without worrying about unexpected results.</p>
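<p>To make the &#8220;last class wins&#8221; rule concrete, here is a deliberately simplified sketch of the idea. This is <em>not</em> how <code>tailwind-merge</code> works internally&#8212;the real library understands every Tailwind utility group&#8212;this toy version only treats classes that share the same prefix as conflicting:</p>

```javascript
// Toy illustration of last-wins conflict resolution.
// Assumption: classes conflict when they share the prefix before the
// last hyphen, so "p-2" and "p-5" collide, but "p-2" and "m-2" do not.
// The real tailwind-merge uses a full map of Tailwind's utility groups.
function naiveMerge(...classLists) {
  const winners = new Map();
  for (const list of classLists) {
    for (const cls of list.split(/\s+/).filter(Boolean)) {
      const i = cls.lastIndexOf('-');
      const group = i > 0 ? cls.slice(0, i) : cls; // "p-5" -> group "p"
      winners.set(group, cls); // later classes overwrite earlier ones
    }
  }
  return [...winners.values()].join(' ');
}

console.log(naiveMerge('p-2 text-sm', 'p-5')); // "p-5 text-sm"
```

<p>The grouping heuristic here is far too crude for real projects (it would wrongly treat <code>text-sm</code> and <code>text-center</code> as a conflict, for instance), which is exactly why a dedicated package like <code>tailwind-merge</code> is worth reaching for.</p>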
<h2>4. Tailwind Variants</h2>
<p>Another thing that can get messy is when you start adding different styles based on props in your reusable component. For example, a button might need multiple versions such as different sizes, colors, or states like disabled or loading.</p>
<p><code><a rel="nofollow noopener" target="_blank" href="https://www.npmjs.com/package/tailwind-variants">tailwind-variants</a></code> helps solve this problem by giving you an easy and organized way to define those style variations without all the clutter. Let’s take a look at an example below:</p>
<pre>
import { tv } from 'tailwind-variants';

const button = tv({
  // Base classes that apply regardless of variants
  base: 'font-semibold rounded-lg shadow-md transition ease-in-out duration-150',
  
  // Define the available variants
  variants: {
    // 1. Variant for color/intent
    intent: {
      primary: 'bg-blue-500 hover:bg-blue-600 text-white',
      secondary: 'bg-gray-200 hover:bg-gray-300 text-gray-800',
    },
    // 2. Variant for size
    size: {
      sm: 'py-1 px-2 text-sm',
      lg: 'py-3 px-6 text-lg',
    },
    // 3. Boolean variant for full width
    fullWidth: {
      true: 'w-full',
    },
  },
  
  // Define default values if no prop is provided
  defaultVariants: {
    intent: 'primary',
    size: 'lg',
  },
});
</pre>
<p>In this example, we set the base styles for the button, then defined variants for intent (color), size, and a boolean for full width. We also set default variants to ensure the button has a consistent look when no specific props are provided.</p>
<p>Inside your component, you call the button function with the desired props, and it automatically generates the correct class string:</p>
<pre>
// Example 1: Use defaults
const classes1 = button();
// Result: "font-semibold rounded-lg shadow-md ... bg-blue-500 hover:bg-blue-600 text-white py-3 px-6 text-lg"

// Example 2: Override defaults
const classes2 = button({ intent: 'secondary', size: 'sm' });
// Result: "font-semibold rounded-lg shadow-md ... bg-gray-200 hover:bg-gray-300 text-gray-800 py-1 px-2 text-sm"

// Example 3: Use the boolean variant
const classes3 = button({ fullWidth: true });
// Result: "font-semibold rounded-lg shadow-md ... (primary/lg styles) w-full"
</pre>
<h2>5. Tailwind Config Viewer</h2>
<p>As your Tailwind project grows, your config file can become large and difficult to navigate. With custom colors, spacing, utilities, and breakpoints, it’s easy for developers and designers to forget what’s available or where to find it. Searching through the file for a class name or color code wastes time and often leads to mistakes or hard-coded values.</p>
<p>This is where <code><a rel="nofollow noopener" target="_blank" href="https://www.npmjs.com/package/tailwind-config-viewer">tailwind-config-viewer</a></code> comes in. It’s a helpful NPM package that creates a visual style guide from your project’s <code>tailwind.config.js</code> file.</p>
<p>Instead of digging through the config file manually, you get a clean, searchable web interface that shows all your custom settings, including colors, spacing, typography, and breakpoints, right in your browser.</p>
<p>To get started, you can run the following to install it:</p>
<pre>
npm i tailwind-config-viewer -D
</pre>
<p>Then, add a custom script in your <code>package.json</code> file.</p>
<pre>
"scripts": {
  "tailwind-config-viewer": "tailwind-config-viewer -o"
}
</pre>
<p>Now, you can run <code>npm run tailwind-config-viewer</code>, and it will launch a local server displaying your Tailwind CSS configuration in a user-friendly format, as seen below:</p>
<figure>
    <img loading="lazy" decoding="async" src="https://assets.hongkiat.com/uploads/tailwind-css-productivity-tools/tailwindcss-config-viewer.jpg" alt="Tailwind config viewer style guide interface" width="1000" height="600">
  </figure>
<h2>Wrapping Up</h2>
<p>Tailwind CSS is a powerful framework, but managing its utility-first approach can get overwhelming as projects grow. The tools we’ve explored in this article can significantly boost your productivity by simplifying class management, improving code quality, and enhancing your workflow.</p>
<p>We hope that these tools help you work more efficiently with Tailwind CSS and make your development process smoother and more enjoyable.</p><p>The post <a href="https://www.hongkiat.com/blog/tailwind-css-productivity-tools/">5 Tailwind CSS Essential Tools to Boost Your Productivity</a> appeared first on <a href="https://www.hongkiat.com/blog">Hongkiat</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">74219</post-id>	</item>
	</channel>
</rss>