In my previous post I mentioned experimenting with Agentic AI in my daily work. A few weeks in, I want to share specific workflow optimizations I have built using Claude Cowork – Anthropic’s desktop agent that runs inside a local Linux VM with file system access.
I will skip the basics of what Claude is and how Cowork mode works. If you are reading this, you probably know. What I want to focus on is the architecture of the system I have built around it, what worked, what broke, and what I would do differently.
The problem space
I manage cloud partnerships across several hyperscalers. Each provider has its own partner portal, account manager network, territory lists, co-sell programs, and funding mechanisms. The data lives in Excel trackers, CRM exports, call transcripts, and people’s heads. None of these systems talk to each other natively.
The core challenge: keeping a unified view of active opportunities, almost a hundred partner contacts across providers, a dozen regional sales managers, and multiple territory account lists – while the underlying data changes after every call. This is a synchronization problem, and it compounds fast when you are onboarding new cloud partnerships on top of an existing one.
Architecture: markdown as a state layer
The first optimization was treating markdown files as a structured state layer rather than just documentation. I maintain a CLAUDE.md file that acts as an index – team structure, active deals, regional ownership, hyperscaler context, contact references, and a recent activity log. Each entity (region, account, contact, call) gets its own file in a /memory directory.
The key insight: CLAUDE.md is read by Claude at session start via the project instructions mechanism. This means every session begins with full context — no re-explaining, no “here is what we discussed last time.” The file is roughly 200 lines and includes navigation pointers to deeper files. Think of it as a routing table for context.
This scales better than stuffing everything into one massive file. When Claude needs detail on a specific territory, it reads the linked file. When it needs the glossary of partner program acronyms (MAP, ACE, WAR, MDF — every cloud provider loves abbreviations), it reads that. The central file stays lean.
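To make the routing-table idea concrete, here is a stripped-down sketch of the shape such an index can take – the section names, paths, and entries are illustrative, not my actual workspace:

```markdown
# CLAUDE.md – workspace index

## Team & regions
- EMEA North → /memory/regions/emea-north.md
- APAC → /memory/regions/apac.md

## Active deals
- One file per opportunity under /memory/deals/

## Reference
- Partner program glossary (MAP, ACE, MDF, …) → /memory/glossary.md
- Contacts → /memory/contacts.md

## Recent activity
- 2026-02-10: analyzed new AM territory list, updated outreach tracker
```

The point is that every entry is a pointer, not the content itself – Claude follows the link only when the task needs that depth.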
I update CLAUDE.md after every significant interaction – either manually or via a scheduled task that runs at end of day.
Multi-cloud account list tracking
The most operationally complex piece is tracking account lists from different cloud providers simultaneously. Here is what actually happens in practice:
A partner account territory list arrives – typically an Excel export with 100-500 accounts. Different providers use different formats, different column names, different entity conventions. One provider exports Account Name, AWS Account ID, TTM GAR, Phase. Another gives you Company, Country, Industry, Revenue. A third sends a CSV with Cyrillic company names.
I built a Cowork skill (a markdown instruction file in .claude/skills/) that standardizes the workflow:
- Parse the uploaded file – handle `.xlsx` and `.csv`, detect encoding, extract account names from whatever column structure exists
- Normalize company names – strip legal suffixes (`GmbH`, `B.V.`, `LLC`, `CJSC`, `JSC`), remove punctuation, lowercase
- Cross-reference against multiple sources: our delivery account database, the outreach tracker, previously analyzed territory lists, and CRM search results
- Run exact matching first, then fuzzy token matching with explicit false positive filtering (the word “bank” in isolation should not match every financial institution)
- Output a structured report with overlap statistics, high-value greenfield prospects, and co-sell recommendations
The skill file includes the actual Python normalization function, the list of data sources with their file paths, and a section on common pitfalls I discovered through iteration. For example: openpyxl sometimes returns integers instead of strings for cell values, so every .value call needs a str() wrapper before .strip().
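The normalization and matching logic can be sketched along these lines – the suffix list, stopword tokens, and exact regexes here are illustrative reconstructions, not the contents of the actual skill file:

```python
import re

def normalize_name(value) -> str:
    """Normalize a company name for cross-referencing."""
    # openpyxl can return ints for numeric-looking cells, so coerce first
    name = str(value).strip().lower()
    # strip common legal suffixes before removing punctuation (illustrative subset)
    name = re.sub(r"\b(gmbh|b\.?v\.?|llc|cjsc|jsc|ltd|inc)\.?$", "", name)
    # drop remaining punctuation, collapse whitespace
    name = re.sub(r"[^\w\s]", " ", name)
    return " ".join(name.split())

# Tokens too generic to count as a match on their own (illustrative)
STOPWORD_TOKENS = {"bank", "group", "holding", "international"}

def fuzzy_match(a: str, b: str) -> bool:
    """Token-overlap match with false-positive filtering: the shared
    tokens must include at least one non-generic token."""
    shared = set(a.split()) & set(b.split())
    return bool(shared - STOPWORD_TOKENS)
```

Exact matching runs first on the normalized strings; the fuzzy pass only kicks in for leftovers, which keeps the false-positive filter cheap.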
Each analysis produces a markdown report saved to a consistent location (/Territory_Analysis_{AM_Name}_{Date}.md). The compound value: when a new list comes in, Claude cross-references not just against our customers, but against all previously analyzed territories.
The xlsx problem (yeah, I know I could move it to a SQL DB)
Working programmatically with multiple Excel files is a pain. I wrote dedicated Python update scripts (update_contacts.py, update_tracker.py) that accept JSON instructions and handle all the formatting preservation logic. These scripts live in the skill directories. Now when Claude needs to update the contacts database, it generates a JSON payload and calls the script, rather than writing raw openpyxl code each time.
This is a pattern worth generalizing: for any task where Claude produces inconsistent code-level output, extract the stable logic into a script and reduce Claude’s job to generating structured input. Treat it as a function call boundary – AI generates the parameters, deterministic code handles the execution.
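A minimal sketch of that function call boundary, with a hypothetical instruction schema and a plain dict standing in for the openpyxl workbook:

```python
import json

def apply_updates(payload: str, workbook: dict) -> dict:
    """Apply a JSON instruction payload to a workbook-like dict.
    A real update script would write via openpyxl and preserve
    formatting; here a plain dict stands in for the spreadsheet."""
    instructions = json.loads(payload)
    for op in instructions["operations"]:
        if op["action"] == "set":
            workbook.setdefault(op["row_key"], {})[op["column"]] = op["value"]
        elif op["action"] == "delete":
            workbook.pop(op["row_key"], None)
        else:
            raise ValueError(f"unknown action: {op['action']}")
    return workbook

# Claude's job reduces to emitting a payload like this:
payload = json.dumps({
    "operations": [
        {"action": "set", "row_key": "jane-doe",
         "column": "title", "value": "Partner Manager"}
    ]
})
```

The AI side only ever touches the schema; every formatting decision lives in deterministic code that can be tested once and trusted afterward.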
Scheduled tasks as background infrastructure
Cowork supports cron-style scheduled tasks. I run six:
- A daily memory sync at end of day – it reviews session transcripts and updates CLAUDE.md, the contacts file, and the outreach tracker with anything that changed.
- A weekly reconciliation on Friday mornings – it scans all call notes from the week and checks for gaps in the tracker or contacts database.
- A stale task detector twice a week – it reads TASKS.md and flags anything overdue or unassigned.
The remaining three handle workspace-specific syncs for different workstreams.
The important optimization here: these tasks do not just run scripts. They are full Claude sessions with their own prompts. The daily memory sync, for example, reads the day’s session activity, decides what changed, and writes targeted updates. This means the memory layer stays current even if I forget to update it manually during a busy day.
Context window management
This is the biggest practical constraint. After processing 3-4 call transcripts and running a territory analysis in a single session, you are approaching context limits. Claude starts losing track of what it updated earlier in the conversation.
My workaround: when a session hits the limit, it generates a structured summary — what was done, what files were modified, what is pending. The next session picks up from this summary. It works, but there is information loss. The compressed summary captures decisions and outcomes, but not the nuance of why certain choices were made.
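A handoff summary along these lines does the job – the headings are my illustration rather than a fixed format:

```markdown
# Session handoff – 2026-02-13

## Done
- Processed 3 call transcripts; updated contacts for two deals

## Files modified
- /memory/contacts.md
- /memory/deals/<deal-name>.md

## Pending
- Territory overlap report still needs co-sell recommendations
```

The "Pending" section is the part that matters most – it is what the next session actually acts on.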
A better pattern I am experimenting with: keeping sessions shorter and more focused (one task type per session), and relying on the file system as the persistence layer rather than in-session memory. If every update is immediately written to disk, the context window only needs to hold the current task, not the session history.
What I would optimize next
Structured logging. Right now, changes to the memory layer are implicit – embedded in file updates. I want to add an explicit changelog that tracks what changed, when, and why. This would make the weekly reconciliation smarter and help debug inconsistencies.
Skill versioning. Skills evolve as I discover edge cases. I have no version history – just the current file. Adding git-style versioning to skill files would let me track what changed and roll back if a skill update introduces regressions.
MCP integrations. Cowork supports connecting to external tools via Model Context Protocol – CRM, email, Slack, calendar. Direct integrations would eliminate the manual data transfer and open up real-time pipeline sync.
The takeaway
The highest-leverage optimization is not teaching Claude to do a specific task. It is building the infrastructure – memory files, skills, update scripts, scheduled syncs – that makes every task faster and more consistent. The upfront investment in structuring your workspace pays compound returns.
Three weeks in, Claude processes my call notes, maintains my contact database, analyzes territory lists across providers, tracks tasks with ownership, and keeps everything synchronized.
The experiment continues.