Article · March 25, 2026 · 9 min read

Cloud Hosting for Business Software: What You're Actually Paying For

"Cloud hosted" is one of the most overused phrases in software. It could mean a single virtual machine or a multi-region, redundant infrastructure. Here's what the components actually are, what uptime SLAs really mean, and what to ask before you sign anything.




When a software vendor or development partner says their solution is "cloud hosted," that phrase carries almost no specific meaning on its own. It could mean anything from a single virtual machine rented from a cloud provider to a multi-region, auto-scaling infrastructure with redundant databases and a 99.99% uptime SLA. The gap between those two things — in cost, reliability, capability, and operational responsibility — is enormous.

This post breaks down the actual components of cloud hosting, the different models under which business software gets hosted, what uptime guarantees actually mean, and the questions worth asking before you sign anything.


Server infrastructure in a modern data center — the physical foundation beneath cloud-hosted software


What "the cloud" actually is

Cloud computing refers to computing resources — servers, storage, databases, networking — delivered over the internet by large infrastructure providers rather than run on hardware owned and managed by the organization using them.

The major cloud providers — Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure — operate enormous physical data centers globally and rent access to computing resources on demand. When someone says software is "in the cloud," they mean it's running on hardware in one of these providers' data centers, accessed over the internet.

The practical implications:

You don't own the hardware. The physical servers running your software belong to AWS, GCP, or Azure. You rent access to them, typically billed by usage (compute time, storage consumed, data transferred).

You can scale without buying hardware. Adding more server capacity is a configuration change and a billing change, not a procurement process. This is one of the core operational advantages of cloud infrastructure.

Geographic distribution is possible. Cloud providers have data centers across multiple regions and countries. Software can be deployed closer to users, or replicated across regions for redundancy.

"The cloud" is infrastructure, not a guarantee of anything. An application running on AWS with no redundancy, no backups, and no monitoring is "cloud hosted" — but it doesn't have the reliability characteristics people typically imagine when they hear that phrase.


The core components of a hosted application

A hosted web application is not a single thing — it's several components working together. Understanding what they are makes it easier to evaluate what's actually being provided.

Compute (application servers). The virtual machines or containers that actually run the application code. Every time a user loads a page or makes a request, a server processes it. Compute is usually the most visible component of hosting.

Database. The system where application data is stored and retrieved. Databases can be hosted on the same server as the application or on separate managed services (like AWS RDS or Supabase). The database is typically the most critical component — losing application code is recoverable; losing data often isn't.

Storage. Where files, images, documents, and other binary assets live. Usually separate from the database, often in object storage (like Amazon S3) that is highly durable and scalable.

Networking. Load balancers that distribute traffic across multiple servers, CDNs (Content Delivery Networks) that cache static content closer to users, DNS configuration that routes requests to the right servers, and firewall rules that control what traffic is allowed.

Caching layer. Systems like Redis or Memcached that store frequently accessed data in memory so it can be retrieved faster than querying the database. Not every application uses one, but high-traffic applications typically do.

Background job processing. Many applications run tasks asynchronously — sending emails, processing uploads, generating reports — separate from the main request-handling servers. This requires its own infrastructure.

Understanding which of these components exist in an application's architecture, how each is configured, and who is responsible for each is the basis for a real conversation about hosting.
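To make the caching component above concrete, here is a minimal sketch of the cache-aside pattern that layers like Redis are typically used for. A plain Python dict stands in for the cache, and `fetch_user_from_db` is a hypothetical stand-in for a real database query:

```python
# Cache-aside sketch: check the cache first, fall back to the "database",
# then populate the cache. A plain dict stands in for Redis/Memcached;
# the database lookup is a hypothetical stand-in as well.

cache = {}      # stands in for Redis or Memcached
db_calls = 0    # counts how often we take the slow path

def fetch_user_from_db(user_id):
    """Pretend database query (the slow path)."""
    global db_calls
    db_calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside read: serve from cache when possible."""
    if user_id in cache:
        return cache[user_id]
    user = fetch_user_from_db(user_id)
    cache[user_id] = user   # populate the cache for next time
    return user

first = get_user(42)    # cache miss: hits the database
second = get_user(42)   # cache hit: served from memory, db_calls stays at 1
```

The pattern is why a cache failure usually degrades performance rather than breaking the application: every read still has a database fallback.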


Hosting models: what you're actually getting

Unmanaged hosting — You (or your team) provision virtual machines from a cloud provider and configure everything yourselves: the operating system, runtime environment, web server, database, security settings, and so on. Maximum control, maximum operational responsibility. Appropriate for organizations with dedicated infrastructure engineering capacity.

Platform-as-a-Service (PaaS) — Providers like Railway, Heroku, Fly.io, and Render abstract away the underlying infrastructure. You provide application code; the platform handles deployment, scaling, and much of the operational management. Less control over infrastructure specifics, significantly less operational overhead. A common choice for applications that don't require fine-grained infrastructure control.

Managed services — Cloud providers offer fully managed versions of specific components: managed databases (AWS RDS, Google Cloud SQL), managed caching (ElastiCache), managed queues (SQS). These eliminate the operational burden of running those components yourself while keeping them within the cloud ecosystem.

Fully managed by a development partner — The development team or engineering partner owns and operates the full infrastructure stack, handling provisioning, configuration, security, monitoring, updates, and incident response. From the client's perspective, the software runs reliably and the infrastructure is someone else's operational responsibility.

Most production applications use a combination of these: a PaaS for application deployment, managed services for databases and caching, and cloud storage for assets — with varying degrees of partner involvement in ongoing management.


What uptime SLAs actually mean

Service Level Agreements (SLAs) for uptime are expressed as a percentage of time the service is available in a given period. The numbers look similar but represent very different amounts of downtime:

SLA        Downtime per year    Downtime per month
99%        87.6 hours           7.3 hours
99.9%      8.76 hours           43.8 minutes
99.95%     4.38 hours           21.9 minutes
99.99%     52.6 minutes         4.4 minutes
99.999%    5.26 minutes         26.3 seconds
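The downtime figures follow from simple arithmetic on the unavailable fraction of a year. A short sketch of the calculation:

```python
# Downtime budget implied by an uptime SLA percentage.
HOURS_PER_YEAR = 365 * 24   # 8,760 hours (ignoring leap years)

def downtime_per_year_hours(sla_percent):
    """Hours per year the service may be down and still meet the SLA."""
    return (100 - sla_percent) / 100 * HOURS_PER_YEAR

def downtime_per_month_minutes(sla_percent):
    """The same budget expressed per average month."""
    return downtime_per_year_hours(sla_percent) / 12 * 60

print(round(downtime_per_year_hours(99.9), 2))      # 8.76 hours/year
print(round(downtime_per_month_minutes(99.99), 1))  # 4.4 minutes/month
```

Each added nine cuts the budget by a factor of ten, which is why the jump from 99.9% to 99.99% usually requires a redundancy architecture, not just a better server.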

A few things the SLA number doesn't tell you:

What counts as downtime. SLAs typically define "downtime" as complete unavailability, not degraded performance. An application running at 10% of normal speed may not constitute an SLA violation even if it's practically unusable.

What the remedy is. Most cloud provider SLAs offer service credits (a discount on future billing) when SLA targets are missed. This is compensation, not recovery — it doesn't address the business impact of the downtime itself.

Whether it applies to your specific configuration. SLA guarantees from cloud providers apply to their infrastructure components. An application with a single point of failure in its architecture can go down even if the underlying cloud infrastructure is fully available.

Achieving high availability — 99.99% and above — requires architectural choices (multiple availability zones, redundant components, automatic failover) that go beyond simply choosing a cloud provider.


A development team reviewing infrastructure architecture and deployment configuration


Backups: what they cover and what they don't

A backup is a point-in-time copy of data that can be used to restore the system to that state if the current data is lost or corrupted. Backups are necessary but not sufficient for data protection.

What to understand about any backup arrangement:

Frequency. How often are backups taken? Daily backups mean that in a data loss event, you may lose up to 24 hours of data. For many applications this is acceptable; for others it isn't.

Retention. How long are backups kept? If data corruption is discovered two weeks after it occurs (which is more common than it sounds), backups that are only retained for seven days don't help.

What's actually backed up. Application databases are typically the priority. But what about file storage? Configuration files? Third-party service credentials? A backup strategy that covers only part of what's needed to restore a system is incomplete.

Recovery testing. A backup that has never been tested is an assumption, not a guarantee. Periodic restoration tests — actually restoring from a backup to a test environment and verifying the result — are the only way to know that backups are functional.

Recovery time. How long does a restoration actually take? For a database of significant size, restoration from backup can take hours. That's the actual recovery window, not just the time to locate the backup.
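The recovery-testing point above can be made concrete with a sketch. SQLite is used here only because it keeps the example self-contained; the same back-up, restore, verify loop applies to any database engine:

```python
# Recovery-test sketch: back up a SQLite database, restore it to a fresh
# file, and verify the restored copy actually contains the expected data.
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
live_path = os.path.join(workdir, "live.db")
backup_path = os.path.join(workdir, "backup.db")
restore_path = os.path.join(workdir, "restored.db")

# 1. Create a "production" database with some rows.
live = sqlite3.connect(live_path)
live.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
live.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (24.50,)])
live.commit()

# 2. Take a backup using SQLite's online backup API.
bak = sqlite3.connect(backup_path)
live.backup(bak)
bak.close()

# 3. Restore: copy the backup file to a new location, as a real restore would.
with open(backup_path, "rb") as src, open(restore_path, "wb") as dst:
    dst.write(src.read())

# 4. Verify: open the restored copy and check the data survived the round trip.
restored = sqlite3.connect(restore_path)
row_count = restored.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(row_count)  # 2
```

In production the verification step is usually richer (row counts per table, checksums, sampling critical records), but the shape is the same: a backup only counts once a restore from it has been exercised.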


The security responsibility model

Cloud hosting involves a shared responsibility model between the provider and the operator. Understanding where the boundary is matters.

What the cloud provider is responsible for:

  • Physical security of data center facilities
  • Security of the hypervisor and virtualization layer
  • Network infrastructure security
  • Availability and durability of their managed services

What the operator (you or your partner) is responsible for:

  • Operating system and runtime configuration on compute instances
  • Application security (authentication, authorization, input validation)
  • Network configuration (firewall rules, security groups, VPC configuration)
  • Identity and access management (who can access what in the cloud environment)
  • Data encryption at rest and in transit
  • Secrets management (API keys, database credentials, environment variables)
  • Monitoring and logging configuration

The cloud provider secures the infrastructure; securing the application and its configuration is the responsibility of whoever operates it. This distinction is frequently misunderstood: moving to AWS does not transfer responsibility for application security to Amazon.
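One operator-side responsibility from the list above, secrets management, can be sketched briefly. The convention is to read credentials from the environment (populated by the platform or a secrets manager) rather than hardcoding them; `DB_PASSWORD` is a hypothetical variable name:

```python
# Secrets-management sketch: read credentials from environment variables
# instead of committing them to source code. DB_PASSWORD is a hypothetical
# name; real deployments would have it injected by the platform.
import os

def get_required_secret(name):
    """Fail fast at startup if a required secret is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required secret: {name}")
    return value

os.environ["DB_PASSWORD"] = "example-only"  # normally set by the platform, not in code
password = get_required_secret("DB_PASSWORD")
```

Failing fast on a missing secret at startup is deliberate: a misconfigured deployment should refuse to boot rather than fail mid-request in production.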


Questions worth asking about any hosting arrangement

Before accepting a hosting arrangement — whether from a software vendor, a development partner, or your own team — these questions surface the important details:

  1. Where specifically is the application hosted? Which provider, which region, which services?
  2. What is the redundancy architecture? What happens if a single server fails? A single availability zone?
  3. What is the backup frequency and retention period? What's the recovery point objective (how much data could be lost)?
  4. How long does restoration actually take? What's the recovery time objective?
  5. Who gets alerted when something goes wrong? What does the on-call or monitoring arrangement look like?
  6. Who owns the cloud infrastructure accounts? If the relationship with a hosting partner ends, can you take over the infrastructure?
  7. What is the SLA, and what does "downtime" mean in the context of that SLA?
  8. What security review process was applied to the infrastructure configuration?

The answers to these questions reveal what's actually being provided — not the marketing version, but the operational reality.

Written by

Chris Coussa

Founder, Day2 Innovative Technical Solutions
