Optimizing a Multi-Cloud Partnership Workflow with Claude Cowork

In my previous post I mentioned experimenting with Agentic AI in my daily work. A few weeks in, I want to share specific workflow optimizations I have built using Claude Cowork – Anthropic’s desktop agent that runs inside a local Linux VM with file system access.

I will skip the basics of what Claude is and how Cowork mode works. If you are reading this, you probably know. What I want to focus on is the architecture of the system I have built around it, what worked, what broke, and what I would do differently.

The problem space

I manage cloud partnerships across several hyperscalers. Each provider has its own partner portal, account manager network, territory lists, co-sell programs, and funding mechanisms. The data lives in Excel trackers, CRM exports, call transcripts, and people’s heads. None of these systems talk to each other natively.

The core challenge: keeping a unified view of active opportunities, almost a hundred partner contacts across providers, a dozen regional sales managers, and multiple territory account lists – while the underlying data changes after every call. This is a synchronization problem, and it compounds fast when you are onboarding new cloud partnerships on top of an existing one.

Architecture: markdown as a state layer

The first optimization was treating markdown files as a structured state layer rather than just documentation. I maintain a CLAUDE.md file that acts as an index – team structure, active deals, regional ownership, hyperscaler context, contact references, and a recent activity log. Each entity (region, account, contact, call) gets its own file in a /memory directory.

The key insight: CLAUDE.md is read by Claude at session start via the project instructions mechanism. This means every session begins with full context — no re-explaining, no “here is what we discussed last time.” The file is roughly 200 lines and includes navigation pointers to deeper files. Think of it as a routing table for context.

This scales better than stuffing everything into one massive file. When Claude needs detail on a specific territory, it reads the linked file. When it needs the glossary of partner program acronyms (MAP, ACE, WAR, MDF — every cloud provider loves abbreviations), it reads that. The central file stays lean.
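For illustration, the index file looks roughly like this – section names, paths, and entries here are made up to show the shape, not the actual contents:

```markdown
# CLAUDE.md — workspace index (illustrative)

## Team and regional ownership
- Per-region detail: see /memory/regions/

## Active deals
- High-level status only; per-account detail in /memory/accounts/

## Glossary
- Partner program acronyms: see /memory/glossary.md

## Recent activity
- {Date}: processed call notes for {AM_Name}; updated outreach tracker
```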

I update CLAUDE.md after every significant interaction – either manually or via a scheduled task that runs at end of day.

Multi-cloud account list tracking

The most operationally complex piece is tracking account lists from different cloud providers simultaneously. Here is what actually happens in practice:

A partner account territory arrives as an Excel export, typically with 100-500 accounts. Different providers use different formats, different column names, different entity conventions. One provider exports Account Name, AWS Account ID, TTM GAR, Phase. Another gives you Company, Country, Industry, Revenue. A third sends a CSV with Cyrillic company names.

I built a Cowork skill (a markdown instruction file in .claude/skills/) that standardizes the workflow:

  1. Parse the uploaded file – handle .xlsx, .csv, detect encoding, extract account names from whatever column structure exists
  2. Normalize company names – strip legal suffixes (GmbH, B.V., LLC, CJSC, JSC), remove punctuation, lowercase
  3. Cross-reference against multiple sources: our delivery account database, the outreach tracker, previously analyzed territory lists, and CRM search results
  4. Run exact matching first, then fuzzy token matching with explicit false positive filtering (the word “bank” in isolation should not match every financial institution)
  5. Output a structured report with overlap statistics, high-value greenfield prospects, and co-sell recommendations
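As a rough sketch of steps 2 and 4 – the suffix list, the stopword list, and the 0.8 threshold below are illustrative, not the exact values in my skill file:

```python
import re

# Illustrative lists — the real skill file carries longer, curated versions.
LEGAL_SUFFIXES = {"gmbh", "bv", "llc", "cjsc", "jsc", "ltd", "inc", "ag", "sa"}
STOPWORDS = {"bank", "group", "holding", "international"}  # too generic to match on alone

def normalize(name: str) -> str:
    """Lowercase, strip punctuation and legal suffixes."""
    name = name.lower()
    name = name.replace(".", "")          # "b.v." -> "bv" before tokenizing
    name = re.sub(r"[^\w\s]", " ", name)  # remaining punctuation -> space
    tokens = [t for t in name.split() if t not in LEGAL_SUFFIXES]
    return " ".join(tokens)

def fuzzy_match(a: str, b: str) -> bool:
    """Token-overlap match that ignores overly generic tokens."""
    ta = set(normalize(a).split()) - STOPWORDS
    tb = set(normalize(b).split()) - STOPWORDS
    if not ta or not tb:
        return False  # nothing distinctive left to match on
    return len(ta & tb) / min(len(ta), len(tb)) >= 0.8
```

Exact matching on the normalized form runs first; the fuzzy pass only handles what remains, which keeps false positives like the "bank" case contained.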

The skill file includes the actual Python normalization function, the list of data sources with their file paths, and a section on common pitfalls I discovered through iteration. For example: openpyxl sometimes returns integers instead of strings for cell values, so every .value call needs a str() wrapper before .strip().
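The wrapper itself is trivial – something along these lines (the helper name is mine, for illustration):

```python
def safe_cell(value) -> str:
    """Coerce an openpyxl cell .value (may be int, float, or None) to a stripped string."""
    if value is None:
        return ""
    return str(value).strip()
```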

Each analysis produces a markdown report saved to a consistent location (/Territory_Analysis_{AM_Name}_{Date}.md). The compound value: when a new list comes in, Claude cross-references not just against our customers, but against all previously analyzed territories.

The xlsx problem (yeah, I know I can move it to a SQL DB)

Working programmatically with multiple Excel files is a pain. I wrote dedicated Python update scripts (update_contacts.py, update_tracker.py) that accept JSON instructions and handle all the formatting preservation logic. These scripts live in the skill directories. Now when Claude needs to update the contacts database, it generates a JSON payload and calls the script, rather than writing raw openpyxl code each time.

This is a pattern worth generalizing: for any task where Claude produces inconsistent code-level output, extract the stable logic into a script and reduce Claude’s job to generating structured input. Treat it as a function call boundary – AI generates the parameters, deterministic code handles the execution.
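A stripped-down sketch of that boundary – here a plain dict stands in for the Excel workbook the real update scripts manipulate via openpyxl, and the instruction schema is illustrative:

```python
import json

def apply_instructions(db: dict, instructions: list) -> dict:
    """Apply a list of {"op", "key", "fields"} instructions in order, deterministically."""
    for ins in instructions:
        if ins["op"] == "upsert":
            db.setdefault(ins["key"], {}).update(ins.get("fields", {}))
        elif ins["op"] == "delete":
            db.pop(ins["key"], None)
        else:
            raise ValueError(f"unknown op: {ins['op']}")
    return db

# Claude's job shrinks to emitting a payload like this; the script does the rest.
payload = json.loads("""
{"instructions": [
  {"op": "upsert", "key": "jane-doe", "fields": {"provider": "AWS", "role": "PSM"}}
]}
""")
```

All the fragile logic (formatting preservation, type coercion, column mapping) stays in reviewed code; the AI only produces data that the code validates.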

Scheduled tasks as background infrastructure

Cowork supports cron-style scheduled tasks. I run six:

  • A daily memory sync at end of day – reviews session transcripts and updates CLAUDE.md, the contacts file, and the outreach tracker with anything that changed.
  • A weekly reconciliation on Friday mornings – scans all call notes from the week and checks for gaps in the tracker or contacts database.
  • A stale task detector twice a week – reads TASKS.md and flags anything overdue or unassigned.

The remaining three handle workspace-specific syncs for different workstreams.

The important optimization here: these tasks do not just run scripts. They are full Claude sessions with their own prompts. The daily memory sync, for example, reads the day’s session activity, decides what changed, and writes targeted updates. This means the memory layer stays current even if I forget to update it manually during a busy day.
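For illustration, the daily sync prompt is structured roughly like this (paraphrased, not the exact wording):

```markdown
# Daily memory sync — prompt (illustrative)

1. Read today's session transcripts from the workspace.
2. Compare them against CLAUDE.md, the contacts file, and the outreach tracker.
3. Write targeted updates only for entries that changed; do not rewrite unchanged sections.
4. Append a one-line summary to the recent activity log in CLAUDE.md.
```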

Context window management

This is the biggest practical constraint. After processing 3-4 call transcripts and running a territory analysis in a single session, you are approaching context limits. Claude starts losing track of what it updated earlier in the conversation.

My workaround: when a session hits the limit, it generates a structured summary — what was done, what files were modified, what is pending. The next session picks up from this summary. It works, but there is information loss. The compressed summary captures decisions and outcomes, but not the nuance of why certain choices were made.
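The summary follows a simple template – roughly this shape, with made-up contents for illustration:

```markdown
## Session handoff — {Date}

**Done:** processed 3 call transcripts; territory analysis for {AM_Name}
**Files modified:** /memory/contacts.md, /Territory_Analysis_{AM_Name}_{Date}.md
**Pending:** co-sell recommendations not yet drafted; two contacts missing regions
```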

A better pattern I am experimenting with: keeping sessions shorter and more focused (one task type per session), and relying on the file system as the persistence layer rather than in-session memory. If every update is immediately written to disk, the context window only needs to hold the current task, not the session history.

What I would optimize next

Structured logging. Right now, changes to the memory layer are implicit – embedded in file updates. I want to add an explicit changelog that tracks what changed, when, and why. This would make the weekly reconciliation smarter and help debug inconsistencies.

Skill versioning. Skills evolve as I discover edge cases. I have no version history – just the current file. Adding git-style versioning to skill files would let me track what changed and roll back if a skill update introduces regressions.

MCP integrations. Cowork supports connecting to external tools via Model Context Protocol – CRM, email, Slack, calendar. Direct integrations would eliminate the manual data transfer and open up real-time pipeline sync.

The takeaway

The highest-leverage optimization is not teaching Claude to do a specific task. It is building the infrastructure – memory files, skills, update scripts, scheduled syncs – that makes every task faster and more consistent. The upfront investment in structuring your workspace pays compound returns.

Three weeks in, Claude processes my call notes, maintains my contact database, analyzes territory lists across providers, tracks tasks with ownership, and keeps everything synchronized.

The experiment continues.


Report Review: How Generative AI Is Changing The Future Of Work

Oliver Wyman Forum consultants released a 100-page report for senior and C-level management on how GenAI is transforming the modern workforce, society, and consumer habits. And it is a very exciting read!

The authors surveyed 15,000 respondents in 16 countries during October-November 2023 and presented the report at the Davos 2024 conference. It provides a high-level picture of GenAI's impact, but also goes into detail on regional, industry, and generational differences in the current state of GenAI adoption and its potential future.

Apart from the obvious findings – the speed of adoption (it took the Internet 17 years to acquire the number of users ChatGPT reached in 10 months) and people's overall optimism about AI (96% of respondents say generative AI can benefit their jobs) – there were several insights I found genuinely interesting.

Mass adoption ≠ mass productivity

The dramatic uptake in generative AI has been useful for many but hasn’t yet resulted in significant productivity gains across the board. Why?

The authors suggest that we may be facing what is called the productivity paradox (also known as the Solow computer paradox): the peculiar observation from business process analysis that, as more investment is made in information technology, worker productivity may go down instead of up. The observation was firmly supported by empirical evidence from the 1970s to the early 1990s.

Workforce pyramid disruption

While it is clear by now that entry-level jobs will be affected the most by GenAI adoption, there will be another effect as well. As generative AI replaces some front-line roles, it will disrupt the pipeline into manager roles – a "collapse of the middle" in the job pyramid. To simplify: the role of a front-line manager or supervisor may become obsolete, as it could be taken over by the former "junior" employees.

Disconnect between employee and employer priorities

There is a clear difference between what employees and employers see as important to learn today. My take on this discrepancy: AI adoption will happen as part of company strategy, so employees will learn how to use AI anyway – it will simply become part of their role descriptions. But analytical and creative thinking is far harder to master than prompt engineering.



Book Review: HBR's 10 Must Reads on AI, Analytics and the New Machine Age

Only after I pressed the "purchase" button did I realize that this book was published back in 2018. Feels like ages ago. But it was really interesting to see which of the authors' predictions came true and which are still just concepts. Many cool new technologies from 2018 never crossed the chasm.

This was the first of HBR's Must Reads books I have finished, and the format has several pros and cons. First, different authors have different writing styles, which is a bit disorienting for the reader – especially since the quality of the writing varies as well. Second, the writers seem aware of readers' short attention spans and try to pack in as many ideas as possible while trimming the text. It works, but it leaves the reader with a lot of unpacked thoughts.

Of the ten articles, the most interesting were "Marketing in the Age of Alexa", "Collaborative Intelligence", and "When Your Boss Wears Metal Pants". All three analyze how humans will collaborate with AI in various workplaces and businesses. While we do not see such collaboration in full yet, there are clear indications of the trend. The breakthrough will come when AI is considered part of the team and participates in task assignment on par with humans. "When Your Boss Wears Metal Pants" gives an overview of experiments and research on how humans behave in such situations. The results were quite surprising to me.

There are also plenty of predictions that did not become reality. For example, we do not see massive use of commercial drones, although there is now a clear use case for them as a weapon. Marketing organizations did not shift their targets from humans to AI assistants, and, overall, AI assistants are very far from the point where we can delegate complex tasks to them and rely on their decisions to make purchases on our behalf.

Several articles touched on how business uses, and will use, AR and AI/ML (Stitch Fix, for example), and it is funny to realize that from a consumer perspective you may not know that a product's or service's value was generated for you by artificial intelligence. As a technologist, I am aware that the infusion of AI/ML capabilities into apps and tech products is happening on a massive scale, but it is rarely visible to the end user. Those predictions from 2018 came 100% true.

Overall, this book is a 4 out of 5. It has several interesting ideas, but they are worthwhile only if you want to learn what the train of thought was back in 2018.

Check out my other book reviews!



State of Phygital Report: thoughts and insights

During the latest Christmas holidays I read the State of Phygital Report, which covers the definition of "phygital", use cases, and an analysis of the impact on existing industries and verticals. Below are some of my thoughts on it.

I guess many of us are waiting for more AR/VR features in our smartphones and consumer electronics. In fact, dozens of such features and apps are already available today. So what is the next step? The authors of the report believe it is a Phygital revolution.

We see Phygital as the philosophy of a new world order, where Phygital essentially enables the close integration of the virtual environment (digital) into real human life (physical).

State of Phygital report, 2021

State of AI Report: thoughts and insights

I recently came across the State of AI report and want to share some of my thoughts on it.

First of all, the report has a lot of data, but most interesting for me was the Industry part, which covers companies and their products in the AI space. I believe it is obvious by now that almost every industry is, or will be, affected by the infusion of AI/ML features into products, workflows, and processes.

Some notable examples from the report:

  • Use of AI-based microscopy to find the most effective cancer drug and improve survival
  • The UK National Grid Electricity System Operator implemented a new electricity consumption forecasting system that more than doubled forecast precision
  • More than 300 different apps are using OpenAI GPT-3 integrations that currently generate an average of 4.5 billion words per day

And it is not surprising. Almost every industry has to deal with capacity planning, prediction, and forecasting – areas where AI is superior to humans.

Among the apps using GPT-3 integrations is GitHub Copilot, which basically converts comments into code, can create functions, and suggests unit tests. How fast will such systems replace software engineers?

And it is not a rhetorical question – in a world-first, South Africa granted a patent to an AI system. The system, called Dabus, invented a method to better interlock food containers. Most countries, however, do not recognize a machine as an inventor.

The patent application was submitted to patent offices in the US, the EU, Australia, and South Africa. It was rejected in the US and the EU: a US judge ruled that only a human can hold a patent, because under American law "a natural person" must take an oath that they are the inventor. A contradictory ruling came out of Australia, stating that an AI can be named as an inventor in a patent application.

Now the question is whether we will have enough critical raw materials (CRMs) to meet the demand of high-performance computing systems for training and running AI/ML models.

Countries accounting for largest share of EU supply of CRMs
