Allocating AI and other pieces of your workload placement puzzle

Consider performance, latency, security, costs and other factors as you mull where to place your applications

Allocating application workloads to locations that deliver the best performance with the highest efficiency is a daunting task. Enterprise IT leaders know this all too well.

As applications become more distributed across multiple clouds and on-premises systems, they generate more data, which makes them both more costly to operate and harder to move as data gravity grows.

Accordingly, applications that power enterprise systems must sit closer to the data, which means organizations must move compute capabilities closer to where that data is generated. This especially benefits data-hungry applications such as AI, which are fueled by large quantities of data.

To make this happen, organizations are building out infrastructure that supports data needs both within and outside the organization - from datacenters and colos to public clouds and the edge. Competent IT departments cultivate such multicloud estates to run hundreds or even thousands of applications.

You know what else numbers in the hundreds to thousands of components? Jigsaw puzzles.

Workload Placement and... Jigsaw Puzzles?

Exactly how is placing workloads akin to putting together a jigsaw puzzle? So glad you asked. Both require careful planning and execution. With a jigsaw puzzle - say, one of those 1,000-plus piece beasts - it helps to first figure out how the pieces fit together, then assemble them in the right order.

The same is true for placing application workloads in a multicloud environment. You need to carefully plan which applications will go where - internally, externally, or both - based on performance, scalability, latency, security, costs and other factors.

Putting an application in the wrong place could have major performance and financial ramifications. Here are four workload types and considerations for locating each, according to findings from IDC research sponsored by Dell Technologies.

AI - The placement of AI workloads is one of the hottest topics du jour, given the rapid rise of generative AI technologies. AI workloads comprise two main components - training and inferencing. IT departments can run AI algorithm development and training, which are performance-intensive, on premises, IDC says. And the data is trending that way: 55 percent of the IT decision makers Dell surveyed cited performance as the main reason for running GenAI workloads on premises. Conversely, less intensive inferencing tasks can be run in a distributed fashion at edge locations, in public cloud environments or on premises.

HPC - High-performance computing (HPC) applications also comprise two major components - modeling and simulation. And like AI workloads, HPC model development can be performance-intensive, so it may make sense to run such workloads on premises, where there is lower risk of latency. Less intensive simulation can run reliably across public clouds, on-premises systems and edge locations.

One caveat for performance-heavy workloads that IT leaders should consider: specialized hardware such as GPUs and other accelerators is expensive. As a result, many organizations may elect to run AI and HPC workloads in resource-rich public clouds. However, running such workloads in production can cause costs to soar, especially as the data grows and the attendant gravity increases. Moreover, repatriating an AI or HPC workload whose data grew 100x while running in a public cloud is harsh on your IT budget; data egress fees alone may make it prohibitive.
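To make that data-gravity math concrete, here is a minimal back-of-envelope sketch in Python. The per-GB egress rate, dataset size and 100x growth factor are illustrative assumptions, not quoted prices from any particular cloud provider.

# Hypothetical back-of-envelope estimate of a one-time repatriation (egress) cost.
# The per-GB rate and dataset sizes are illustrative assumptions, not quoted
# prices from any specific cloud provider.

EGRESS_RATE_PER_GB = 0.09  # assumed $/GB to transfer data out of a public cloud

def repatriation_cost(dataset_gb: float, growth_factor: float = 1.0) -> float:
    """One-time egress cost to move a dataset out of a public cloud."""
    return dataset_gb * growth_factor * EGRESS_RATE_PER_GB

initial_gb = 500  # assumed dataset size when the workload first moved to the cloud
print(f"At original size:  ${repatriation_cost(initial_gb):,.0f}")
print(f"After 100x growth: ${repatriation_cost(initial_gb, growth_factor=100):,.0f}")

Under these assumed figures the one-time egress bill grows from roughly $45 to roughly $4,500 - and real datasets for AI and HPC workloads are often measured in terabytes or petabytes, not hundreds of gigabytes.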

Cyber Recovery - Organizations today prioritize data protection and recovery, thanks to threats from malicious actors and natural disasters alike. Keeping a valid copy of data outside of production systems enables organizations to recover data lost or corrupted in an adverse event. Public cloud services generally satisfy organizations' data protection needs, but transferring data out becomes costly thanks to high data egress fees, IDC says. One option is hosting the recovery environment adjacent to the cloud service - for example, in a colocation facility with a dedicated private network link to the public cloud. This minimizes egress costs while ensuring speedy recovery.

Application Development - IT leaders know the public cloud has proven well suited for application development and testing, as it lends itself to the developer ethos of rapidly building and refining apps that accommodate the business. However, private clouds may prove a better option for organizations building software intended to deliver a competitive advantage, IDC argues. A private cloud affords developers greater control over corporate intellectual property while retaining much of the agility of a public cloud.

The Bottom Line

As an IT leader, you must assess the best place for an application based on several factors. App requirements will vary, so analyze the total expected ROI of your workload placements before you place them.
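As a rough illustration of that kind of analysis, the Python sketch below compares candidate placements for a single workload by a simple expected annual ROI. All of the dollar figures are hypothetical, and a real assessment would also weigh latency, security, data gravity and migration costs.

# Hypothetical comparison of placement options for a single workload.
# All dollar figures are illustrative assumptions, not benchmarks or price quotes.

placements = {
    # location: (estimated annual cost, estimated annual business benefit)
    "public_cloud": (120_000, 150_000),
    "on_premises": (100_000, 150_000),
    "colocation": (110_000, 150_000),
}

def roi(cost: float, benefit: float) -> float:
    """Simple expected ROI: net benefit relative to cost."""
    return (benefit - cost) / cost

for location, (cost, benefit) in placements.items():
    print(f"{location}: estimated ROI {roi(cost, benefit):.0%}")

best = max(placements, key=lambda loc: roi(*placements[loc]))
print("Best placement by this simple measure:", best)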

Also consider: Workload placement is not a one-and-done activity. Repatriating workloads from various clouds or other environments to better meet business needs is always an option.

Our Dell Technologies APEX portfolio of solutions accounts for the various workload placement requirements and challenges your organization may encounter as you build out your multicloud estate. Dell APEX's subscription-based consumption model helps you procure more compute and storage as needed - so you can reduce your capital outlay.

It's true: The stakes for assembling a jigsaw puzzle aren't the same as allocating workloads in a complex IT environment. Yet completing both can provide a strong feeling of accomplishment. How will you build your multicloud estate?

Learn more about how Dell APEX can help you allocate workloads across your multicloud estate.

Brought to you by Dell Technologies.
