Cloud Architecture for SME AI: BigQuery, Cloud Functions, and Beyond

Practical cloud architecture patterns for SMEs deploying AI. How to use BigQuery, Cloud Functions, and managed services cost-effectively.

Alistair Williams · 23 February 2026 · 7 min read

There is a misconception that AI infrastructure requires enterprise-scale budgets and dedicated DevOps teams. It does not. The cloud platforms available today allow a ten-person company to run production AI systems with the same architectural sophistication as a Fortune 500 firm, at a fraction of the cost.

The trick is knowing which services to use, how to combine them, and what to avoid. After building AI infrastructure for dozens of UK SMEs, we have converged on a core architecture that is reliable, cost-effective, and genuinely manageable without a full-time cloud engineer.

The Core Stack: Why We Start Here

Our standard AI architecture for SMEs centres on three Google Cloud services: BigQuery for data warehousing, Cloud Functions for processing logic, and Cloud Scheduler for orchestration. This is not arbitrary brand loyalty. It is a pragmatic choice based on cost, capability, and operational simplicity.

BigQuery charges for storage and queries, not for keeping a server running. An SME with a few hundred thousand rows of data pays pennies per month for storage and a few pounds for queries. Compare that with running a PostgreSQL instance 24/7, which costs upwards of £50/month even at the smallest tier, regardless of whether anyone queries it. For AI workloads that involve periodic analysis rather than constant transactions, BigQuery's pricing model is dramatically more cost-effective.

Cloud Functions execute code only when triggered, and you pay only for execution time. A function that processes 1,000 documents per day might run for a total of ten minutes. At Google Cloud's pricing, that costs less than £2/month. The equivalent always-on server would cost £30-50/month. More importantly, Cloud Functions scale automatically. If your processing volume doubles, the infrastructure handles it without intervention.
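The arithmetic behind these claims is easy to sanity-check yourself. The sketch below estimates a monthly Cloud Functions bill from daily volume and per-request duration; the unit prices are illustrative assumptions, not current Google Cloud list prices, so check the pricing page for your region before relying on the numbers.

```python
# Back-of-envelope estimate of monthly Cloud Functions cost for an
# intermittent workload. Unit prices below are illustrative
# assumptions, NOT current Google Cloud list prices.

GBP_PER_GHZ_SECOND = 0.0000085       # assumed compute price
GBP_PER_GB_SECOND = 0.0000019        # assumed memory price
GBP_PER_MILLION_INVOCATIONS = 0.31   # assumed invocation price

def monthly_function_cost(invocations_per_day: int,
                          seconds_per_invocation: float,
                          memory_gb: float = 0.25,
                          cpu_ghz: float = 0.4) -> float:
    """Rough monthly cost in GBP for a single Cloud Function."""
    monthly_invocations = invocations_per_day * 30
    runtime_seconds = monthly_invocations * seconds_per_invocation
    compute = runtime_seconds * cpu_ghz * GBP_PER_GHZ_SECOND
    memory = runtime_seconds * memory_gb * GBP_PER_GB_SECOND
    invocation = (monthly_invocations / 1_000_000
                  * GBP_PER_MILLION_INVOCATIONS)
    return compute + memory + invocation

# 1,000 documents/day at ~0.6 s each (ten minutes of runtime per day):
# pennies per month, versus £30-50/month for an always-on server.
print(f"£{monthly_function_cost(1000, 0.6):.2f}/month")
```

Even if the assumed prices are off by a factor of two, the conclusion is the same: for intermittent workloads, pay-per-execution is an order of magnitude cheaper than an always-on server.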

Cloud Scheduler triggers Cloud Functions on a schedule. Daily data syncs, weekly reports, hourly health checks. Simple, reliable, and costs almost nothing.
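The scheduling layer amounts to a handful of cron expressions mapped to function triggers. The job names and schedules below are hypothetical examples; the jobs themselves would be created with gcloud or Terraform, and this sketch only captures the mental model plus a cheap sanity check.

```python
# Illustrative Cloud Scheduler jobs: each fires a Cloud Function on a
# cron schedule. Names and schedules are hypothetical examples.

SCHEDULED_JOBS = {
    "daily-accounts-sync": "0 6 * * *",   # every day at 06:00
    "weekly-report":       "0 8 * * 1",   # Mondays at 08:00
    "hourly-health-check": "0 * * * *",   # on the hour
}

def is_valid_cron(expr: str) -> bool:
    """Cheap sanity check: a standard cron expression has five fields."""
    return len(expr.split()) == 5

for name, schedule in SCHEDULED_JOBS.items():
    assert is_valid_cron(schedule), f"bad cron for {name}"
```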

This combination gives you a production-grade AI infrastructure for under £20/month in most cases. The cost scales linearly with usage, so you are never paying for capacity you do not need.

Data Architecture: Getting Your Data Where AI Can Use It

The most common barrier to AI adoption for SMEs is not the AI itself. It is getting data into a usable state. Your business data lives in half a dozen systems: your CRM, your accounting software, your ecommerce platform, your email marketing tool, your spreadsheets. Each one has its own format, its own quirks, and its own API.

Our standard data architecture has three layers:

Ingestion layer. Cloud Functions that connect to each source system, extract data, transform it into a standardised format, and load it into BigQuery. Each connector runs on a schedule appropriate to the data: daily for accounting data, hourly for order data, every fifteen minutes for customer interactions.
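The transform step of a connector is usually a small, pure function. A minimal sketch, assuming hypothetical source and warehouse field names, might look like this:

```python
# Sketch of the "transform" step in an ingestion connector: map one
# raw record from a source system onto the standardised warehouse
# schema. Field names and the source format are hypothetical.

from datetime import datetime, timezone

def normalise_order(raw: dict, source: str) -> dict:
    """Map one raw order record onto the warehouse schema."""
    return {
        "source_system": source,
        "order_id": str(raw["id"]),
        # Dates become timezone-aware ISO 8601 strings.
        "ordered_at": datetime.fromisoformat(raw["date"])
                              .replace(tzinfo=timezone.utc)
                              .isoformat(),
        # Currency is stored as integer pence to avoid float rounding.
        "total_pence": round(float(raw["total"]) * 100),
        "currency": raw.get("currency", "GBP"),
    }

row = normalise_order({"id": 42, "date": "2026-02-01T09:30:00",
                       "total": "129.99"}, source="shopify")
```

Keeping this logic in one function per source makes the "switch CRM providers, change one connector" promise concrete: the warehouse schema on the right-hand side never changes.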

Warehouse layer. BigQuery tables organised by domain: customers, orders, products, marketing, operations. Each table has a consistent schema with standardised date formats, currency fields, and identifiers. This is your single source of truth.

AI layer. Views and materialised queries that prepare data specifically for AI consumption. These aggregate, join, and transform the warehouse data into the features your models need. When a model requires "average order value by customer over the last 90 days", that calculation lives in a BigQuery view, not in your application code.
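To make that concrete, here is one possible shape for such a view, generated from a parameterised template. The dataset, table, and column names are hypothetical; the SQL would be deployed as a BigQuery view, not executed from application code.

```python
# One possible shape for an AI-layer view: average order value per
# customer over a rolling window. Dataset, table, and column names
# are hypothetical examples.

def aov_view_sql(window_days: int = 90) -> str:
    return f"""
    CREATE OR REPLACE VIEW ai.customer_aov AS
    SELECT
      customer_id,
      AVG(total_pence) / 100 AS avg_order_value,
      COUNT(*) AS order_count
    FROM warehouse.orders
    WHERE ordered_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(),
                                      INTERVAL {window_days} DAY)
    GROUP BY customer_id
    """

print(aov_view_sql())
```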

This separation means each layer can evolve independently. When you switch CRM providers, only the ingestion connector changes. When you add a new AI model, only the AI layer views change. The warehouse remains stable.

Serverless vs Always-On: Making the Right Choice

Serverless is not always the answer. Understanding when to use Cloud Functions and when to run a persistent service saves both money and headaches.

Use serverless (Cloud Functions) when:

  • The workload is intermittent or scheduled
  • Processing time per request is under 10 minutes
  • You do not need to maintain state between requests
  • Cold start latency (a few seconds) is acceptable

Use an always-on service when:

  • You need sub-second response times consistently
  • The service handles continuous traffic
  • You need persistent connections (WebSockets, database connection pools)
  • Processing tasks exceed Cloud Function timeout limits
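The two checklists above collapse into a simple decision rule. The thresholds in this sketch mirror the rules of thumb in the text; treat them as heuristics, not hard limits.

```python
# The serverless vs always-on checklist as a tiny decision helper.
# Thresholds mirror the rules of thumb in the text; heuristics only.

def recommended_runtime(*, intermittent: bool,
                        seconds_per_request: float,
                        needs_state: bool,
                        needs_persistent_connections: bool,
                        cold_start_ok: bool) -> str:
    serverless_fits = (
        intermittent
        and seconds_per_request < 600        # under ten minutes
        and not needs_state
        and not needs_persistent_connections
        and cold_start_ok
    )
    return "serverless" if serverless_fits else "always-on"

# A nightly batch job fits serverless; a WebSocket API does not.
print(recommended_runtime(intermittent=True, seconds_per_request=120,
                          needs_state=False,
                          needs_persistent_connections=False,
                          cold_start_ok=True))
```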

For most SME AI workloads, serverless covers 80-90% of requirements. The typical pattern is a small Node.js or Python application serving the user-facing API (always-on, handling real-time requests), with Cloud Functions handling all the background processing, data syncing, and batch AI operations.

One of our clients runs their entire AI infrastructure this way. The front-end application costs about £15/month on a small Cloud Run instance. The background processing, which includes daily data synchronisation from seven sources, AI-powered report generation, and automated document processing, runs entirely on Cloud Functions at a combined cost of under £10/month.

Security and Data Residency for UK Businesses

UK businesses have legitimate concerns about where their data lives and who can access it. GDPR compliance is non-negotiable, and many businesses also need to satisfy their own clients that data handling meets appropriate standards.

For Google Cloud, we default to the europe-west2 (London) region for all services. This keeps data within the UK and reduces latency. BigQuery datasets, Cloud Function deployments, Cloud Storage buckets, and any other stateful services all run in London unless there is a specific reason to do otherwise.

Service account permissions follow the principle of least privilege. Each Cloud Function gets its own service account with access only to the specific BigQuery tables and Cloud Storage buckets it needs. If a function only reads from the orders table, it cannot access the customers table. This limits the blast radius if any single component is compromised.
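Least privilege is easiest to reason about as an explicit map from function to tables. In practice these bindings live in IAM (usually managed via Terraform); the function and table names below are hypothetical, and this sketch only captures the mental model.

```python
# Least privilege in miniature: each function's service account is
# granted access to only the tables it needs. Function and table
# names are hypothetical; real bindings live in IAM, not in code.

TABLE_ACCESS = {
    "sync-orders":      {"warehouse.orders"},
    "generate-reports": {"warehouse.orders", "warehouse.customers"},
}

def can_read(function_name: str, table: str) -> bool:
    return table in TABLE_ACCESS.get(function_name, set())

assert can_read("sync-orders", "warehouse.orders")
assert not can_read("sync-orders", "warehouse.customers")
```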

Credentials never appear in code. API keys, OAuth tokens, and service account keys are stored in Google Secret Manager or encrypted configuration files that are never committed to version control. We run automated scans as part of every deployment to catch accidental credential exposure.
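A minimal version of that pre-deployment scan is just pattern matching over source lines. Production scanners (gitleaks, for example) use far richer rule sets and entropy checks; the patterns here are illustrative only.

```python
# Minimal sketch of a pre-deployment credential scan: flag lines that
# look like hard-coded secrets. Patterns are illustrative; real
# scanners use much richer rule sets.

import re

SECRET_PATTERNS = [
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),              # Google API key shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def scan_for_secrets(text: str) -> list[str]:
    """Return the offending lines, if any."""
    return [line for line in text.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]

clean = scan_for_secrets('api_key = os.environ["API_KEY"]')
dirty = scan_for_secrets('API_KEY = "sk_live_abcdefghijklmnopqrst"')
```

Reading the key from the environment passes; the hard-coded literal is flagged.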

For businesses handling particularly sensitive data, we implement additional controls: VPC Service Controls to prevent data exfiltration, Cloud Audit Logs for complete access tracking, and customer-managed encryption keys for data at rest.

Cost Management: Staying Predictable

Cloud costs have a reputation for spiralling out of control. For AI workloads, this is a legitimate concern because a misconfigured query or a runaway processing loop can generate unexpected bills.

We implement three safeguards:

Budget alerts. Every project has budget alerts at 50%, 80%, and 100% of expected monthly spend. When a threshold is breached, notifications go to the project owner immediately. This catches anomalies early.
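Cloud Billing budget alerts implement this ladder natively; the sketch below just makes the threshold logic explicit.

```python
# The 50/80/100% alert ladder as a pure function: given expected and
# actual spend, which thresholds have been breached? Cloud Billing
# budget alerts do this natively; this only illustrates the logic.

THRESHOLDS = (0.5, 0.8, 1.0)

def breached_thresholds(expected_monthly: float,
                        spend_so_far: float) -> list:
    ratio = spend_so_far / expected_monthly
    return [t for t in THRESHOLDS if ratio >= t]

# £120 expected, £100 spent so far: the 50% and 80% alerts have fired.
print(breached_thresholds(120.0, 100.0))   # [0.5, 0.8]
```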

Query cost controls. BigQuery allows you to set maximum bytes billed per query. We set this for every application to prevent accidental full-table scans. A query that would exceed the limit fails with an error rather than running up a large bill.

Resource quotas. Cloud Function invocation limits and concurrent execution caps prevent runaway loops. If a bug causes a function to trigger itself recursively, the quota limit catches it before it processes thousands of unnecessary executions.

With these safeguards, our clients' cloud bills are predictable to within 10-15% month over month. Most SMEs running production AI systems spend between £30 and £150/month on cloud infrastructure, which is less than most SaaS subscriptions.

Building Your Cloud AI Foundation

The architecture described here is not theoretical. It is the standard foundation we deploy during every Mind Build engagement, refined over dozens of implementations. It scales from processing a few hundred items per day to tens of thousands without architectural changes.

If you are considering cloud infrastructure for AI but uncertain about the architecture, cost, or security implications, our Mind Map assessment includes a technology readiness evaluation that maps your current systems and recommends the most cost-effective cloud architecture for your specific needs.

Get in touch to discuss how cloud architecture can power AI in your business without the enterprise price tag.

Alistair Williams

Founder & Lead AI Consultant

Built a 100+ skill production AI system for his own agency. Now builds yours.

cloud architecture · BigQuery · Cloud Functions · SME · Google Cloud · serverless
