Word count: ~8,000 words; estimated reading time: 12 minutes
Anthropic's Claude Code can now read your Slack messages and write code for you
Anthropic on Monday launched a beta integration that connects its fast-growing Claude Code programming agent directly to Slack, allowing software engineers to delegate coding tasks without leaving the workplace messaging platform where much of their daily communication already happens.
The release, which Anthropic describes as a "research preview," is the AI safety company's latest move to embed its technology deeper into enterprise workflows — and comes as Claude Code has emerged as a surprise revenue engine, generating over $1 billion in annualized revenue just six months after its public debut in May.
The mechanics are deceptively simple but address a persistent friction point in software development: the gap between where problems get discussed and where they get fixed. When a user mentions @Claude in a Slack channel or thread, Claude analyzes the message to determine whether it constitutes a coding task. If it does, the system automatically creates a new Claude Code session. Users can also explicitly instruct Claude to treat requests as coding tasks.
Claude gathers context from recent channel and thread messages in Slack to feed into the Claude Code session. It uses this context to automatically choose which repository to run the task on, drawing from the repositories you've authenticated with Claude Code on the web.
The feature builds on Anthropic's existing Claude for Slack integration and requires users to have access to Claude Code on the web. In practical terms, a product manager reporting a bug in Slack could tag Claude, which would then analyze the conversation context, identify the relevant code repository, investigate the issue, propose a fix, and post a pull request—all while updating the original Slack thread with its progress.
The Slack integration extends Anthropic's bet, positioning Claude Code as an ambient presence in the workspaces where engineering decisions actually get made. According to an Anthropic spokesperson, companies including Rakuten, Novo Nordisk, Uber, Snowflake, and Ramp now use Claude Code among both professional and novice developers.
Rakuten, the Japanese e-commerce giant, has reportedly reduced software development timelines from 24 days to just 5 days using the tool—a 79% reduction that illustrates the productivity claims Anthropic has been making.
The Slack launch is the latest in a rapid series of Claude Code expansions. In late November, Claude Code was added to Anthropic's desktop apps including the Mac version. Claude Code was previously limited to mobile apps and the web.
Anthropic has also invested heavily in the developer infrastructure that powers Claude Code. In late November, Anthropic released three new beta features for tool use: Tool Search Tool, which allows Claude to use search tools to access thousands of tools without consuming its context window; Programmatic Tool Calling, which allows Claude to invoke tools in a code execution environment reducing the impact on the model's context window; and Tool Use Examples, which provides a universal standard for demonstrating how to effectively use a given tool.
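The idea behind the Tool Search Tool — keep tool definitions out of the context window until they are needed — can be shown with a toy registry. This is not Anthropic's API; the registry, the word-overlap scoring, and the tool names below are all invented to illustrate the pattern of searching first and loading only the matches.

```python
# Illustrative sketch (not Anthropic's actual API) of the idea behind a
# "tool search" step: rather than sending thousands of tool definitions to
# the model, the agent searches a registry and loads only the few that match.

TOOL_REGISTRY = {
    "create_invoice": "Create a billing invoice for a customer.",
    "refund_payment": "Refund a completed payment.",
    "search_orders": "Search customer orders by date or status.",
    "resize_image": "Resize an image to given dimensions.",
}

def search_tools(query: str, registry: dict, limit: int = 3) -> list[str]:
    """Score tools by word overlap with the query; return the best names."""
    query_words = set(query.lower().split())
    scored = []
    for name, description in registry.items():
        words = set(description.lower().replace(".", "").split()) | set(name.split("_"))
        overlap = len(query_words & words)
        if overlap:
            scored.append((overlap, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:limit]]

# Only the matching definitions would be placed in the model's context.
print(search_tools("refund a customer payment", TOOL_REGISTRY))
```

A production version would use embeddings or a model call instead of word overlap, but the payoff is the same: context cost scales with the tools actually used, not with the size of the catalog.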
The Model Context Protocol (MCP) is an open standard for connecting AI agents to external systems. Connecting agents to tools and data traditionally requires a custom integration for each pairing, creating fragmentation and duplicated effort that makes it difficult to scale truly connected systems. MCP provides a universal protocol—developers implement MCP once in their agent and it unlocks an entire ecosystem of integrations.
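Concretely, MCP messages are JSON-RPC 2.0, which is what makes one implementation work across the whole ecosystem. The sketch below builds the request an agent would send to ask an MCP server to run a tool; the envelope and `tools/call` method follow the published spec, but the tool itself (`get_weather`) and its arguments are made up for illustration.

```python
# MCP messages are JSON-RPC 2.0. This builds the request an agent sends to
# ask an MCP server to invoke a tool. Field names follow the MCP spec; the
# "get_weather" tool and its arguments are hypothetical.

import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

message = make_tool_call(1, "get_weather", {"city": "Tokyo"})
print(message)
```

Because every server speaks this same envelope, an agent that can emit it once can talk to any MCP integration without per-pairing glue code.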
Booking.com’s agent strategy: Disciplined, modular and already delivering 2× accuracy
When many enterprises weren’t even thinking about agentic behaviors or infrastructures, Booking.com had already “stumbled” into them with its homegrown conversational recommendation system.
This early experimentation has allowed the company to take a step back and avoid getting swept up in the frantic AI agent hype. Instead, it is taking a disciplined, layered, modular approach to model development: small, travel-specific models for cheap, fast inference; larger large language models (LLMs) for reasoning and understanding; and domain-tuned evaluations built in-house when precision is critical.
With this hybrid strategy—combined with selective collaboration with OpenAI—Booking.com has seen accuracy double across key retrieval, ranking, and customer-interaction tasks.
Booking.com's initial, pre-generative-AI tooling for intent and topic detection was a small language model — what Pathak described as "the scale and size of BERT." The model ingested the customer's description of their problem to determine whether it could be solved through self-service or should be bumped to a human agent.
“We started with an architecture of ‘you have to call a tool if this is the intent you detect and this is how you've parsed the structure,’” Pathak explained. “That was very, very similar to the first few agentic architectures that came out in terms of reason and defining a tool call.”
His team has since built out that architecture to include an LLM orchestrator that classifies queries, triggers retrieval-augmented generation (RAG) and calls APIs or smaller, specialized language models. “We've been able to scale that system quite well because it was so close in architecture that, with a few tweaks, we now have a full agentic stack,” said Pathak.
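The orchestration pattern Pathak describes — classify the query, then route it to RAG, an API, or a smaller specialist model — can be sketched in a few lines. Booking.com has not published code; the routes, keywords, and handlers below are invented for illustration, and the `classify` function is a stand-in for what is really an LLM call.

```python
# Hedged sketch of the orchestration pattern described: a classifier routes
# each query to a backend API, retrieval (RAG), or a small specialist model.
# All labels and keyword rules are invented; the real classifier is an LLM.

def classify(query: str) -> str:
    """Stand-in for the LLM classifier; real systems would call a model."""
    q = query.lower()
    if "cancel" in q or "refund" in q:
        return "api"            # deterministic backend action
    if "policy" in q or "visa" in q:
        return "rag"            # answer grounded in retrieved documents
    return "small_model"        # cheap travel-specific model handles the rest

def orchestrate(query: str) -> str:
    route = classify(query)
    handlers = {
        "api": lambda q: f"Calling booking API for: {q}",
        "rag": lambda q: f"Retrieving documents to answer: {q}",
        "small_model": lambda q: f"Routing to specialist model: {q}",
    }
    return handlers[route](query)

print(orchestrate("Can I get a refund for my Osaka hotel?"))
```

The point of the layering is cost: only queries that need reasoning or grounding reach the expensive paths, while the bulk of traffic stays on cheap, fast, travel-specific models.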
As a result, Booking.com is seeing a 2X increase in topic-detection accuracy, which in turn is freeing up human agents' bandwidth by 1.5 to 1.7X. More topics, even complicated ones previously labeled 'other' and requiring escalation, are now being automated.
Ultimately, this supports more self-service, freeing human agents to focus on customers with uniquely specific problems that the platform doesn’t have a dedicated tool flow for—say, a family that is unable to access its hotel room at 2 a.m. when the front desk is closed.
That not only “really starts to compound,” but has a direct, long-term impact on customer retention, Pathak noted. “One of the things we've seen is, the better we are at customer service, the more loyal our customers are.”
Another recent rollout is personalized filtering. Booking.com has between 200 and 250 search filters on its website—an unrealistic number for any human to sift through, Pathak pointed out. So his team introduced a free-text box where users can describe what they want and immediately receive tailored filters.
“That becomes such an important cue for personalization in terms of what you're looking for in your own words rather than a clickstream,” said Pathak.
In turn, it cues Booking.com into what customers actually want. Hot tubs, for instance: when filter personalization first rolled out, jacuzzis were among the most popular requests—something that hadn't even been a consideration before, and for which no filter existed. That filter is now live.
“I had no idea,” Pathak noted. “I had never searched for a hot tub in my room honestly.”
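The free-text-to-filter matching described above could be sketched as a lookup from the user's own words to a shortlist of the site's ~250 filters. The filter names and synonym lists below are invented, and a production system would use an LLM or embedding search rather than substring matching — this only shows the shape of the feature.

```python
# Illustrative sketch of mapping free-text input to a shortlist of filters.
# Filter names and synonyms are invented; production systems would use an
# LLM or embeddings rather than keyword matching.

FILTERS = {
    "jacuzzi": ["hot tub", "jacuzzi", "whirlpool"],
    "pet_friendly": ["pet", "dog", "cat"],
    "free_cancellation": ["free cancellation", "refundable"],
    "sea_view": ["sea view", "ocean view", "beachfront"],
}

def suggest_filters(free_text: str) -> list[str]:
    """Return every filter whose synonyms appear in the user's own words."""
    text = free_text.lower()
    return [name for name, synonyms in FILTERS.items()
            if any(phrase in text for phrase in synonyms)]

print(suggest_filters("a dog-friendly room with a hot tub"))
```

Either way, the signal is the same one Pathak highlights: the request arrives in the customer's own words rather than as a clickstream, which is exactly how an unanticipated demand like "jacuzzi" surfaces.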
When it comes to personalization, though, there is a fine line; memory remains complicated, Pathak emphasized. While it’s important to have long-term memories and evolving threads with customers—retaining information like their typical budgets, preferred hotel star ratings, or whether they need disability access—it must be on their terms and protective of their privacy.
Booking.com is extremely mindful with memory, seeking consent so as to not be “creepy” when collecting customer information.
“Managing memory is much harder than actually building memory,” said Pathak. “The tech is out there, we have the technical chops to build it. We want to make sure we don't launch a memory object that doesn't respect customer consent, that doesn't feel very natural.”
I panicked after reading WeChat official-account articles written by the latest homegrown Chinese AI!
Zhipu's newly upgraded GLM-4.6V
Chinese AI continues to evolve, and Zhipu's GLM-4.6V release shows marked progress. The upgrade improves not only language understanding and generation but also content creation in specific scenarios. Analysis of recent articles written by the model suggests it has reached a new level in information extraction, logical reasoning, and emotional expression.
In WeChat official-account writing in particular, the AI can not only produce high-quality articles but also tailor them to a target audience. This helps businesses raise content-production efficiency and gives readers a richer, more varied reading experience. As homegrown AI technology keeps advancing, its performance in further application scenarios is worth watching.
Design in the age of AI: How small businesses are building big brands faster
The rise of generative AI has shifted how small businesses imagine, launch, and grow — turning what used to be a months-long creative process into something interactive, iterative, and accessible from day one.
Large language models and image generators now act as collaborative partners — sparking ideas, testing directions, and handling tedious layout and copy work. For founders, that means fewer barriers and faster iteration.
Instead of hiring separate agencies for naming, logo design, and web development, small businesses are turning to unified AI platforms that handle the full early-stage design stack. Tools like Design.com merge naming, logo creation, and website generation into a single workflow — turning an entrepreneur’s first sketch into a polished brand system within minutes.
AI tools are giving small businesses superpowers, removing friction from creativity, and redefining what it means to be a designer in the age of AI.
Luo Yonghao's Crossroads: podcasts, young people, and the AI wave
Luo Yonghao's Crossroads podcast has successfully turned his past failures into inspiration for new projects. He has poured more energy into producing podcast content, which has not only let him redefine his own worth but also prompted him to re-examine his role in the tech industry.
By interviewing leading figures across fields, Luo Yonghao has gained new knowledge and inspiration while accumulating valuable resources for future ventures. He stressed the advantages of the younger generation of entrepreneurs, and expressed both excitement and pressure about entrepreneurial opportunities amid the AI wave.
Building a power base for robots: 微悍动力 releases three high-power-density joint modules
微悍动力 recently released three high-power-density joint modules that give robots strong power support. The modules cover full-scenario power solutions from lightweight precision work to heavy-duty operation, further advancing the development and application of robotics.
Zhao Hejuan: When Geoffrey Hinton told me he had regrets | 2025 T-EDGE Global Dialogue
At the T-EDGE Global Dialogue, Zhao Hejuan shared her conversation with Geoffrey Hinton, the father of deep learning. Hinton voiced some regrets, while stressing that this is not regret about deep learning technology itself. The exchange revealed some of the AI field's internal reflection and its possible future directions.
Pay tied to performance! A regulatory one-two punch targets the "funds profit while fund investors don't" dilemma
The newly issued draft Guidelines on Performance Evaluation for Fund Management Companies (征求意见稿) mark a restructuring of the public-fund industry's compensation system that touches its core interests. Under the new rules, when a fund manager's performance trails the benchmark by 10 percentage points or more and returns are negative, performance-based pay must be cut by at least 30%. The change will, for the first time, put direct pay-cut pressure on highly paid fund managers, pushing the industry back toward putting investors' interests first.
Summary
Today's AI news focuses on enterprise applications of large language models (LLMs) and AI assistants, and how they are changing workflows and boosting productivity. Anthropic's Claude Code, through its Slack integration, shows the potential of AI assistants in software development; Booking.com's deliberate agent strategy demonstrates AI's value in customer service and personalized recommendation. Meanwhile, Luo Yonghao's podcast project illustrates new modes of human-AI interaction and its influence on the younger generation of entrepreneurs. Together, these cases show AI technology steadily permeating industries and changing how businesses and individuals work.
Author: Qwen/Qwen2.5-32B-Instruct
Sources: 量子位, 极客公园, 钛媒体, VentureBeat
Editor: 小康