AI-Powered Automation & Content Creation for Businesses
Helping businesses leverage AI, automation, and integrations to streamline workflows and supercharge content creation.
The future of business is AI-driven. I specialize in creating AI-powered solutions that automate processes, integrate seamlessly with your existing tools, and generate content effortlessly. Whether it's WhatsApp and Telegram automation, AI voice agents, or AI-generated videos and images, I help businesses stay ahead of the curve. Let's explore how AI can work for you.

About Me
With over 25 years of experience in IT consulting and over 15 years in photography and videography, I've always been at the forefront of technology and creativity. My journey from visual storytelling to AI innovation has given me a unique perspective on how automation, AI integrations, and content generation can revolutionize businesses.
I now focus on:
- Developing AI-powered mobile apps
- Automating workflows with WhatsApp, Telegram, and CRM integrations
- Creating AI-generated content for businesses, including video and image automation
- Leveraging local LLMs for secure and powerful AI solutions
Businesses today need to embrace AI to stay competitive. Let's connect and explore how AI can transform your operations.
Services
AI-Powered Mobile Apps
Custom-built AI applications that streamline operations, enhance efficiency, and provide innovative solutions tailored to your business needs.
Automations & Integrations
Seamlessly integrate AI into your business operations with WhatsApp, Telegram, email marketing, and CRM automation.
Voice AI Agents
Enhance customer interactions with AI-driven voice agents, providing automated responses and intelligent customer support.
Local LLM Solutions
AI chatbots and tools that run locally, ensuring privacy, security, and speed for businesses needing on-premise AI.
AI-Powered Content Generation
Revolutionize social media and marketing with AI-generated videos, images, and automated content creation.
Past Work Experience
While I've built a strong foundation in photography and videography over the past 15 years, I've now refocused my expertise on AI solutions and mobile development to help businesses innovate and grow.
Psssst… Did you know this website was built with AI?
Not only that: it also scores a perfect 100% on Google PageSpeed Insights for both mobile and desktop.
Why is that important?
Because it means the site loads lightning-fast, works flawlessly on any device, and delivers a smooth experience for every visitor. In other words: no waiting, no glitches, just instant access to what matters. That's the power of combining smart design with AI precision.

Latest AI News

How Auto Research Can Make Claude Code Skills Improve Themselves
Mar 14, 2026
Claude Code Skills are powerful, but anyone who has used them for a while knows they are not always perfectly reliable. Some runs produce exactly what you want. Others feel completely off. A new idea is starting to change that. By combining Claude Code Skills with Auto Research techniques, developers can turn skills into systems that gradually improve themselves through repeated testing and evaluation. <br><br> <ul> <li><a href="#why-skills-struggle">Why Claude Code Skills sometimes struggle</a></li> <li><a href="#auto-research">What Auto Research actually means</a></li> <li><a href="#karpathy">Why Andrej Karpathy’s idea matters here</a></li> <li><a href="#self-improving-skills">How Claude Code Skills can improve themselves</a></li> <li><a href="#metrics">Why metrics are the key to better skills</a></li> <li><a href="#evaluation">Building a simple evaluation system</a></li> <li><a href="#optimization">Turning prompt iteration into optimization</a></li> <li><a href="#broader-impact">Why this idea matters beyond Claude Code</a></li> </ul> <h2 id="why-skills-struggle">Why Claude Code Skills sometimes struggle</h2> <p>Claude Code Skills allow developers to package instructions and workflows into reusable tools that the model can execute. They are extremely useful for automating tasks inside development environments.</p> <p>However, many users notice something quickly. Skills are powerful but not always perfectly consistent.</p> <p>A typical experience looks like this:</p> <ul> <li>Most runs produce good results</li> <li>Some runs produce confusing or incomplete output</li> </ul> <p>This does not mean the skill is broken. It simply reflects the probabilistic nature of language models. 
Slight differences in context or interpretation can lead to different outputs.</p> <p>The challenge is improving consistency without manually rewriting instructions again and again.</p> <br><br> <h2 id="auto-research">What Auto Research actually means</h2> <p>Auto Research is an approach where agents repeatedly test variations of a process and evaluate the results in order to improve performance over time.</p> <p>Instead of relying on intuition or manual tuning, the system experiments automatically. Each iteration generates outputs, evaluates them, adjusts parameters, and tries again.</p> <p>The cycle looks like this:</p> <ul> <li>Run the skill</li> <li>Evaluate the output</li> <li>Adjust the prompt or instructions</li> <li>Run the skill again</li> <li>Keep the best performing version</li> </ul> <p>Over time the skill becomes more reliable because the system learns which instructions consistently produce better outcomes.</p> <p>This turns prompt engineering into a measurable optimization process rather than a guessing game.</p> <br><br> <h2 id="karpathy">Why Andrej Karpathy’s idea matters here</h2> <p>The Auto Research concept gained attention after being shared by Andrej Karpathy.</p> <p>Karpathy is widely known in the AI world. He was one of the early members of OpenAI and later served as Director of AI at Tesla. 
His work in deep learning and neural networks has influenced many modern AI development practices.</p> <p>The original experiment focused on improving machine learning pipelines through autonomous experimentation.</p> <p>What makes the idea exciting is that the same principle applies extremely well to AI workflows such as Claude Code Skills.</p> <p>If a system can measure whether an output is good or bad, it can attempt to improve the instructions that produced that output.</p> <br><br> <h2 id="self-improving-skills">How Claude Code Skills can improve themselves</h2> <p>When combined with an evaluation framework, Claude Code Skills can evolve through repeated testing.</p> <p>A simple improvement loop might work like this:</p> <ul> <li>The skill generates several outputs for the same task</li> <li>An evaluation system scores each output</li> <li>The system modifies the skill instructions</li> <li>The updated skill runs again</li> <li>The best performing configuration is stored</li> </ul> <p>This process gradually identifies instructions that produce more reliable results.</p> <p>Instead of manually guessing how to improve the prompt, the system discovers improvements through structured experimentation.</p> <br><br> <h2 id="metrics">Why metrics are the key to better skills</h2> <p>The most important requirement for Auto Research is an objective metric.</p> <p>The system needs a clear way to measure whether a result is better or worse.</p> <p>For Claude Code Skills this might include:</p> <ul> <li>Evaluation pass rate</li> <li>Task completion accuracy</li> <li>Formatting correctness</li> <li>Compliance with defined rules</li> </ul> <p>Without a metric the system cannot improve itself because it has no signal telling it what success looks like.</p> <p>Once a metric exists, however, the system can compare different prompt variants and gradually move toward higher scores.</p> <br><br> <h2 id="evaluation">Building a simple evaluation system</h2> <p>An evaluation system 
does not need to be complex.</p> <p>A basic setup might generate several outputs for a given prompt and evaluate them against a checklist.</p> <p>For example:</p> <ul> <li>Did the output follow the correct format</li> <li>Did it include the required information</li> <li>Was the reasoning correct</li> <li>Did it satisfy the task constraints</li> </ul> <p>If each criterion produces a score, the system can combine those scores into a total result.</p> <p>That score then becomes the signal used to determine whether a new prompt version performs better or worse.</p> <br><br> <h2 id="optimization">Turning prompt iteration into optimization</h2> <p>Once a scoring system exists, the improvement loop becomes surprisingly powerful.</p> <p>Imagine a setup where:</p> <ul> <li>Ten outputs are generated per run</li> <li>Each output is evaluated on four criteria</li> </ul> <p>This creates a maximum score that the system can aim to improve.</p> <p>Over multiple iterations the optimization process may gradually move the average score upward.</p> <p>The goal is not only higher quality output but also greater consistency.</p> <p>Consistency is often the missing piece when turning AI prototypes into reliable tools.</p> <br><br> <h2 id="broader-impact">Why this idea matters beyond Claude Code</h2> <p>The Auto Research concept is not limited to development tools.</p> <p>Any workflow that produces measurable results can potentially benefit from the same optimization loop.</p> <p>Examples include:</p> <ul> <li>Improving website performance experiments</li> <li>Testing different marketing messages</li> <li>Optimizing landing pages</li> <li>Refining prompts used by AI agents</li> <li>Stabilizing creative generation workflows</li> </ul> <p>The key insight is simple.</p> <p>If something can be measured, it can often be improved through automated experimentation.</p> <p>For AI systems this creates an important shift. 
The value is no longer only in the prompt or the model itself, but also in the history of experiments and improvements that produced the best results.</p> <p>Over time that improvement data becomes one of the most valuable assets in an AI workflow.</p>
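The improvement loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real Claude Code API: `run_skill`, the checklist inside `score_output`, and the example prompt variants are all stand-ins for whatever your actual skill and evaluation criteria look like.

```python
# Minimal sketch of the Auto Research loop: generate outputs for each
# prompt variant, score them against a checklist, keep the best variant.
# All names here are illustrative stand-ins, not a real Claude Code API.

def run_skill(prompt_variant: str, task: str) -> str:
    # Stand-in for invoking the skill; a real system would call the model.
    return f"{prompt_variant}:{task}"

def score_output(output: str) -> int:
    # Checklist scoring: one point per satisfied criterion.
    criteria = [
        "format" in output,   # did it follow the required format?
        "summary" in output,  # did it include the required information?
        len(output) > 10,     # crude proxy for completeness
    ]
    return sum(criteria)

def evaluate_variant(variant: str, tasks: list[str]) -> float:
    # Averaging over several runs gives a more stable signal
    # than judging any single output, which may be an outlier.
    scores = [score_output(run_skill(variant, t)) for t in tasks]
    return sum(scores) / len(scores)

def optimize(variants: list[str], tasks: list[str]) -> str:
    # Keep whichever prompt variant scores highest on the checklist.
    return max(variants, key=lambda v: evaluate_variant(v, tasks))
```

In a real setup the variants would be generated by the model itself (mutations of the skill's instructions) and the checklist would test task-specific constraints, but the structure of the loop stays exactly this: run, score, compare, keep the best.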

Why Google’s gws CLI Matters for AI Agents, Automation, and Workspace Workflows
Mar 10, 2026
Google’s new gws CLI is one of those developer tools that looks small at first and then slowly reveals how important it could become. On the surface, it is “just” a command line interface for Google Workspace. In practice, it points toward something much bigger: a future where AI agents can work with Gmail, Drive, Calendar, Docs, Sheets, and more through one consistent interface instead of a patchwork of separate integrations. For developers, automation builders, and teams working on agentic workflows, that is a meaningful shift. <br><br> <ul> <li><a href="#what-is-gws">What gws actually is</a></li> <li><a href="#why-it-matters">Why this matters for AI agents and automation</a></li> <li><a href="#one-interface">One interface instead of many APIs</a></li> <li><a href="#dynamic">Why the dynamic discovery model matters</a></li> <li><a href="#agent-actions">What agents can realistically do with it</a></li> <li><a href="#team-workflows">Concrete workflow examples for teams</a></li> <li><a href="#business-value">Business value and operational impact</a></li> <li><a href="#limits">Current limitations and what to watch</a></li> </ul> <h2 id="what-is-gws">What gws actually is</h2> <p>gws is a new command line interface that brings together a large part of the Google Workspace ecosystem into one developer facing tool.</p> <p>Instead of building and maintaining separate handling for Gmail, Drive, Calendar, Sheets, Docs, Chat, Admin, and other services, developers can work through one command layer with structured output.</p> <p>That is the real appeal. The value is not simply that it talks to many Google services. The value is that it does so in a more unified way.</p> <p>For AI systems, consistency matters. 
Agents work better when tools behave predictably, return structured results, and do not require custom logic for every single service.</p> <br><br> <h2 id="why-it-matters">Why this matters for AI agents and automation</h2> <p>Most agent workflows break down at the integration layer.</p> <p>The reasoning model may be strong, but the workflow becomes fragile once it has to connect to multiple APIs, manage different schemas, handle different authentication patterns, and translate outputs between systems.</p> <p>That is where gws becomes interesting.</p> <p>If one tool can act as a consistent bridge into Google Workspace, developers spend less time building glue code and more time designing useful workflows.</p> <p>For a solo builder, that means faster prototyping.</p> <p>For a team, it means lower maintenance overhead and fewer brittle automation chains.</p> <br><br> <h2 id="one-interface">One interface instead of many APIs</h2> <p>This may sound like a technical detail, but it has practical consequences.</p> <p>Without a unified interface, an agent that needs to:</p> <ul> <li>Read an email</li> <li>Check a calendar event</li> <li>Open a document</li> <li>Update a spreadsheet</li> </ul> <p>usually needs four different integrations, four different ways of thinking about data, and a lot of custom handling.</p> <p>With gws, the promise is much simpler: one interface, one general operating model, and structured JSON output that an AI model can reason over more easily.</p> <p>That does not remove complexity entirely, but it reduces a major source of friction.</p> <p>For developers building internal tools, executive assistants, support automations, or scheduling workflows, this simplification can save a surprising amount of engineering time.</p> <br><br> <h2 id="dynamic">Why the dynamic discovery model matters</h2> <p>One of the most interesting parts of gws is that it is described as being dynamically built from Google’s Discovery Service.</p> <p>That matters because 
Workspace products evolve constantly. New endpoints appear, capabilities change, and integrations that felt current six months ago can become outdated quickly.</p> <p>In a more traditional setup, every change creates maintenance work.</p> <p>In a dynamic model, new Workspace capabilities can potentially become available much faster without waiting for a separate client tool to be manually rebuilt and redistributed.</p> <p>That is especially valuable for agent builders, because agents become more useful when the tool layer keeps pace with the platform they depend on.</p> <p>It also suggests something important about the direction Google is taking: this is not just a CLI for developers. It looks increasingly like infrastructure for agent ready workflows.</p> <br><br> <h2 id="agent-actions">What agents can realistically do with it</h2> <p>There is a difference between what sounds possible in a demo and what is realistically useful in daily work.</p> <p>The strongest use cases are not “AI does everything.” They are focused, bounded tasks where agents save time without creating chaos.</p> <p>Examples include:</p> <ul> <li>Scheduling meetings after reading context from an email thread</li> <li>Updating a Google Sheet after a support interaction</li> <li>Finding and organizing files in Drive</li> <li>Summarizing a document and drafting follow up notes</li> <li>Creating a daily digest from Calendar, Gmail, and Docs</li> </ul> <p>These are not speculative. They are the kinds of repetitive, structured tasks teams already do every week.</p> <p>The difference is that gws makes it easier to expose those actions to an agent through one common layer.</p> <br><br> <h2 id="team-workflows">Concrete workflow examples for teams</h2> <p><strong>Workflow 1: Sales follow up assistant</strong></p> <p>A sales rep finishes a meeting and drops a short note into a system. 
An agent then uses gws to:</p> <ul> <li>Read the previous Gmail thread</li> <li>Pull the next available time slots from Calendar</li> <li>Draft a follow up email</li> <li>Update a tracking sheet with status and next step</li> </ul> <p>The rep reviews and sends.</p> <p>This saves time without removing human control.</p> <br><br> <p><strong>Workflow 2: Executive daily briefing</strong></p> <p>Each morning, an internal agent can gather:</p> <ul> <li>Today’s calendar events</li> <li>Unread high priority emails</li> <li>Recent updates from a shared document</li> <li>Open tasks from a project sheet</li> </ul> <p>Then it produces a concise morning briefing.</p> <p>This is a simple workflow, but it is exactly the kind of thing that becomes much easier when one tool can access multiple Workspace surfaces consistently.</p> <br><br> <p><strong>Workflow 3: Support operations assistant</strong></p> <p>After a support case is resolved, an agent can:</p> <ul> <li>Create or update a shared troubleshooting doc</li> <li>Log the case outcome in Sheets</li> <li>Send an internal summary to Chat</li> <li>Schedule a follow up reminder in Calendar if needed</li> </ul> <p>That is not glamorous. But it is operationally valuable.</p> <br><br> <p><strong>Workflow 4: Document driven project coordination</strong></p> <p>A team working from Google Docs and Sheets often has scattered status updates.</p> <p>An agent using gws can:</p> <ul> <li>Read the latest planning document</li> <li>Extract action items</li> <li>Match deadlines against Calendar</li> <li>Update a project sheet with the latest responsibilities</li> </ul> <p>Instead of asking humans to manually sync everything, the system helps keep the operational layer tidy.</p> <br><br> <h2 id="business-value">Business value and operational impact</h2> <p>The biggest value of gws is not convenience. 
It is operational leverage.</p> <p>For teams, that leverage shows up in three ways.</p> <p><strong>First, faster prototyping.</strong></p> <p>Developers can build and test agent workflows more quickly when one tool gives them access to many Workspace services.</p> <p><strong>Second, lower maintenance.</strong></p> <p>Fewer custom integrations means fewer places where workflows break when APIs shift or authentication logic changes.</p> <p><strong>Third, better workflow design.</strong></p> <p>When the tool layer is simpler, teams can spend more energy deciding what should be automated and where human review still matters.</p> <p>This is what separates useful agent systems from flashy demos. The real win is not “look what AI can do.” The real win is “this now fits into how our team actually works.”</p> <br><br> <h2 id="limits">Current limitations and what to watch</h2> <p>At this stage, gws still appears to be positioned as an experimental developer example rather than a fully mature enterprise platform.</p> <p>That means teams should be careful not to confuse promising direction with finished infrastructure.</p> <p>Things worth watching closely:</p> <ul> <li>Authentication and access control patterns</li> <li>Permission scoping for sensitive Workspace data</li> <li>Logging and auditability for agent actions</li> <li>Stability of behavior across products and updates</li> <li>How well it integrates into larger agent frameworks over time</li> </ul> <p>In other words, the direction is exciting, but the right mindset is still developer preview, not blind trust.</p> <p>Used thoughtfully, though, gws could become one of the more important building blocks in the next wave of agentic productivity tooling.</p>
Get in Touch
Want to explore how AI can work for you? Reach out today!

