I'm just blogging

About Technology and more

Blog

  • Shortcut: NVMe Memory Tiering – an optimization boost for your TCO

    The future of memory management in VMware vSphere is here, and it’s set to revolutionize how we handle workloads and optimize our infrastructure. The “Advanced NVMe Memory Tiering” feature is now generally available with the GA release of VCF 9, and it promises to significantly boost memory capacity and reduce your total cost of ownership (TCO).

    (Announcement: https://blogs.vmware.com/cloud-foundation/2025/06/19/advanced-memory-tiering-now-available/ )

    What is NVMe Memory Tiering?

    NVMe Memory Tiering allows you to use NVMe devices as a secondary memory tier alongside traditional DRAM. This innovative approach dramatically increases the available memory capacity of your ESXi hosts. Imagine expanding from 64GB of DRAM to several hundred GB with NVMe – the possibilities are immense!

    Key Benefits of NVMe Memory Tiering

    • Massive Memory Expansion: Significantly increase your host’s memory capacity, enabling you to run more and larger virtual machines (VMs).
    • Cost Efficiency: NVMe is significantly cheaper per GB than DRAM, allowing for larger memory footprints at a fraction of the cost.
    • Improved Workload Consolidation: Run more VMs on a single host, enhancing utilization and consolidation ratios.
    • Reduced TCO: Lower overall hardware costs by supplementing DRAM with NVMe, reducing the need for expensive DRAM upgrades.
    • Sizing for Average Usage: Optimize memory sizing for average workloads instead of peak demand, saving on resources and costs.

    Real-World Scenarios and Cost Savings

    The following diagram shows a cost comparison between 32GB, 48GB, 64GB, 96GB & 128GB modules for a host with 32 modules (net prices).

    The result: 128GB, 96GB & 48GB modules are currently not economical compared to 64GB and 32GB modules.
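
    Since the diagram itself and its underlying net prices are not reproduced here, the comparison is easy to redo with your own quotes. Below is a minimal Python sketch; the module prices in it are purely hypothetical placeholders, not the values from the diagram:

        # Hypothetical example: total cost and €/GB for a host with 32 module slots.
        # The prices below are placeholders, NOT the net prices behind the diagram –
        # plug in your own quotes before drawing any conclusions.
        SLOTS = 32

        price_per_module_eur = {  # module size (GB) -> placeholder net price (EUR)
            32: 100,
            48: 210,
            64: 200,
            96: 500,
            128: 650,
        }

        for size_gb, price_eur in sorted(price_per_module_eur.items()):
            capacity_gb = SLOTS * size_gb
            host_cost_eur = SLOTS * price_eur
            print(f"{size_gb:>3} GB modules: {capacity_gb:>5} GB per host, "
                  f"€{host_cost_eur:>6,} total, €{price_eur / size_gb:5.2f} per GB")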

    Consider an environment with low CPU utilization but high memory usage. Without NVMe Tiering, you’d likely need to purchase additional servers, increasing both capital expenditure (Capex) and operational expenditure (Opex). With NVMe Tiering, you can avoid this, enhancing workload density on existing hardware.

    A comparison shows significant cost savings:

    • Scenario 1 (RAM Modules Only): Using traditional RAM upgrades to achieve a +4TB memory increase can lead to a 5-year cost of over €110,000, including hardware and operational expenses.
    • Scenario 2 (NVMe Memory Tiering): Utilizing NVMe for memory tiering can reduce the 5-year cost to approximately €58,000, nearly halving your expenditure.

    Details:

    Scenario 1 – w/ RAM Modules only
    • HPE GreenLake 380 Gen12, 2 sockets with 24 cores each, memory 32x 64GB (2TB); moderate discount, 5-year hardware refresh: 2x €20.000
    • Opex (Facility, annually) – Power, Cooling, Rack: 2x €6.000
    • Opex (Labor, annually) – 1h/month per server, with a fully burdened salary of €150k: 2x €1.020
    • Sum (over 5 years): €110.200

    Scenario 2 – w/ NVMe Memory Tiering for suitable workloads
    • HPE GreenLake 380 Gen12, 2 sockets with 24 cores each, memory 32x 64GB (2TB); moderate discount, 5-year hardware refresh: €20.000
    • Additional NVMe for Memory Tiering (2x 3.84TB); moderate discount, 2x because of RAID1, 3.84TB recommended due to performance & lifetime: €2.835
    • Opex (Facility, annually) – Power, Cooling, Rack: €6.000
    • Opex (Labor, annually) – 1h/month per server, with a fully burdened salary of €150k: €1.020
    • Sum (over 5 years): €57.935
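
    If you want to double-check the sums, here is the same calculation as a small Python sketch based on the figures in the table above. The way I read the table, the “2x” in Scenario 1 reflects the second server needed to reach the additional capacity with DRAM alone, while Scenario 2 gets there with a single server plus a mirrored 3.84TB NVMe tier:

        # Recalculate the 5-year sums from the table above (all figures in EUR).
        YEARS = 5

        def five_year_cost(servers, capex_per_server, nvme_per_server,
                           facility_per_year, labor_per_year):
            """Capex once per server, opex per server and year over the 5-year term."""
            capex = servers * (capex_per_server + nvme_per_server)
            opex = servers * YEARS * (facility_per_year + labor_per_year)
            return capex + opex

        # Scenario 1: two hosts with 2TB DRAM each (RAM modules only) to add 4TB.
        scenario_1 = five_year_cost(servers=2, capex_per_server=20_000, nvme_per_server=0,
                                    facility_per_year=6_000, labor_per_year=1_020)

        # Scenario 2: one host with 2TB DRAM plus 2x 3.84TB NVMe (RAID1) as a memory tier.
        scenario_2 = five_year_cost(servers=1, capex_per_server=20_000, nvme_per_server=2_835,
                                    facility_per_year=6_000, labor_per_year=1_020)

        print(f"Scenario 1 (RAM modules only):    €{scenario_1:,}")  # €110,200
        print(f"Scenario 2 (NVMe Memory Tiering): €{scenario_2:,}")  # €57,935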

    Yes, you are reading that right: with NVMe Memory Tiering we can talk about a DRAM:NVMe ratio of 1:1!

    Of course, not all workloads are suitable for NVMe Tiering. It’s necessary to categorize existing and planned workloads according to their characteristics. Below you will find some points that need to be considered before using the technology. It is important to note that NVMe memory will be available as unmappable RAM in ESXi. That means NVMe Tiering cannot be addressed directly like DRAM and therefore has different characteristics in terms of latency and access speed:

    • Higher Latency than DRAM: NVMe is fast, but still much slower than traditional memory. Latency-sensitive workloads (e.g., databases) may experience performance degradation.
    • Dedicated NVMe Devices Required: The NVMe device cannot be used for other purposes (e.g., as a datastore) at the same time.
    • Compatibility and Support: The chosen hardware must follow the VMware compatibility guidelines.
    • Lifespan and Heat Generation: NVMe SSDs have a limited lifespan and can generate more heat under heavy load compared to DRAM, potentially requiring additional cooling.

    Suitable Workloads

    NVMe Memory Tiering is ideal for:

    • Virtual Desktop Infrastructure (VDI)
    • Modern Apps workloads
    • Workloads with high memory requirements but low latency sensitivity
    • Workloads with infrequent high memory activity, such as end-of-week transactions or backups
    • SQL Server and Oracle deployments (specific configurations recommended)

    However, it’s currently not recommended for:

    • Latency- and performance-critical applications like real-time analytics
    • Real-time environments
    • In-memory databases

    Conclusion: Optimize Your Infrastructure with NVMe Memory Tiering

    NVMe Memory Tiering is set to be an optimization boost for environments where DRAM is limited, cost is a major concern, and workloads have large but not extremely latency-sensitive memory needs. Now that NVMe Memory Tiering is generally available, it offers substantial TCO reductions and improved workload density.

  • The art of mastering platforms – Part 3

    In this third part of “The Art of Mastering Platforms,” we delve into minimizing cognitive load for both consumers and providers, advertising your platform’s success, and exploring how these approaches can lead to cost savings. Let’s uncover the strategies for streamlining operations and maximizing value in today’s complex IT landscape.

    Minimizing Cognitive Load for Consumers and Providers

    Today’s IT landscape is a highly complex ecosystem, both for consumers and for those providing IT services. On one hand, you have traditional systems that reliably support your business but lack flexibility. On the other, your business demands agility, speed, and the ability to quickly deliver and make use of new services, such as AI.

    While it’s essential to grow and innovate with your platform partner of choice (see also Part 2 of my story), it’s equally important to simplify how services are offered while maintaining governance, regulatory compliance, and scalability. Referring back to the “flow-oriented approach” (see Part 1 of my story), consumers expect on-demand resources with minimal friction: no more creating multiple tickets for various services and doing manual integration.

    A better approach is to make foundational services accessible and easy to use straight out of the box, without requiring users to navigate different service behaviors or configurations. Sometimes it’s worth looking back at all the requests, taking the top 10 recurring ones, and automating them as much as possible. Combined with automated policy enforcement, this can really streamline operations and free up time for new service development and consumer interaction.

    Your platform team could define policy sets tailored to specific user groups, such as modern app developers. These policies might grant access to services like Database-as-a-Service or Kubernetes, allowing users to quickly spin up or tear down validated services within predefined limits for compute, storage, and network usage. If your consumers choose any of these pre-validated services, they will have their Kubernetes cluster or database up and running within 5 minutes. But if they need something outside of the proven standard specifications, it will take them longer. And all of a sudden, that specific requirement is not that important anymore. Policies would also enforce restrictions on what is not permitted.
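
    To make this a bit more tangible, here is a minimal sketch of what such a policy set could look like as a simple data structure. It is purely illustrative – the group name, service names and limits are hypothetical and not tied to any particular product’s configuration model:

        from dataclasses import dataclass

        @dataclass
        class PolicySet:
            """Illustrative policy set for one consumer group (all values hypothetical)."""
            group: str
            allowed_services: set[str]
            denied_services: set[str]
            max_vcpu: int
            max_storage_gb: int
            max_networks: int

            def can_provision(self, service: str, vcpu: int, storage_gb: int) -> bool:
                """Pre-validated services within the limits are approved immediately."""
                return (service in self.allowed_services
                        and service not in self.denied_services
                        and vcpu <= self.max_vcpu
                        and storage_gb <= self.max_storage_gb)

        modern_app_devs = PolicySet(
            group="modern-app-developers",
            allowed_services={"kubernetes-cluster", "database-as-a-service"},
            denied_services={"bare-metal", "direct-public-ip"},
            max_vcpu=64,
            max_storage_gb=2_000,
            max_networks=3,
        )

        # Inside the limits -> cluster or database within minutes; outside -> exception process.
        print(modern_app_devs.can_provision("kubernetes-cluster", vcpu=16, storage_gb=500))   # True
        print(modern_app_devs.can_provision("kubernetes-cluster", vcpu=128, storage_gb=500))  # False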

    This model enables each product or team environment to operate independently without disrupting other business units or applications. It provides clear cost transparency through charge-back and show-back mechanisms, all delivered in a self-service model that eliminates the need for interaction with multiple teams. And this leads to a very important point given today’s complexity: it reduces cognitive load for consumers, saves time, accelerates time-to-value, and ensures business resilience.

    What applies to consumers should also apply to your platform team. Consider lifecycle management in today’s era of “gluing everything together.” What value does it add to your business if your team has to manage software dependencies, integrations, upgrades, or firmware updates manually? Shouldn’t these tasks be automated by the platform itself? Imagine having to handle software and hardware upgrades in the public cloud yourself – this would be unthinkable. In a platform world, the focus is on building services that matter to consumers. More time needs to be spent with those users to better understand what to build next. This is also what defines a platform: a platform is only a platform if its added value helps the business generate added value for its customers.

    Reducing cognitive load for your platform team is just as critical as doing so for consumers. By automating routine tasks like upgrades and dependency management, your platform team can focus on delivering high-quality services that drive business value instead of being bogged down by technical debt that doesn’t align with your business goals.

    Optimizing labor effectiveness and organizational structures brings an additional advantage: the opportunity to implement a shared responsibility model. This model clearly defines who owns and handles specific tasks, fostering accountability and collaboration. As a result, it reduces the mean time to resolve issues, enhances flexibility, and gives customers greater control over their applications. It also prevents the common pitfalls of “throwing issues over the fence” or assigning blame when errors occur.

    Advertise your success

    It’s interesting to observe that customers build excellent services on truly cloud-like platforms, yet often fail to promote the technology and its added value internally. For example, I recently spoke to a customer whose platform achieves 99.99% uptime, supports hundreds of business-critical workloads, and offers a delivery model enabling consumers to obtain resources within minutes, all in a fully compliant corporate environment, without the usual ticketing headaches. Remarkably, the entire company had been organized according to SAFe principles for two years.

    When I asked whether they showcased these impressive KPIs at each Product Increment review, I was surprised by their response: they didn’t see the benefit. Yet these very metrics (uptime, delivery speed, and compliance) are essential for promoting the platform to both existing and potential new consumers. Highlighting such achievements demonstrates the stability of the platform, the efficiency of the delivery model, and its ability to compete with third-party platforms (e.g. public clouds).

    Many people, especially from the business, still believe that innovation is only possible with public cloud services, but these results prove that internal platforms can deliver comparable value. This conversation really changes when the platform team starts to build joint success stories with, for example, some of the business-critical service teams that run on the platform. All public cloud providers are building and promoting success stories with their customers. And the same should apply to any platform team.

    At a previous job, we started displaying metrics on screens throughout the office, visible at all times. Whenever managers, customers, or colleagues from other departments visited, these metrics sparked conversations and made things more engaging.

    Following that, we began using these KPIs in more forums, meetings and conferences to position our work, for example, “We onboarded x new customers,” or “It took x days to get customers into production,” etc. There are plenty of KPIs, stories, and even challenges you can highlight and discuss.

    In the daily life of a platform team, success is about more than just technology – it’s about treating your platform as a product. A product is never truly finished; it requires ongoing attention to budgets, customer needs, and internal promotion. To maximize the impact of your platform, you must continuously communicate its value and successes within your organization.

    How these approaches help save costs

    In conclusion, the mentioned strategies and approaches can help organizations reduce costs while improving efficiency and innovation. By adopting modern platforms like VMware Cloud Foundation, streamlining processes, and leveraging automation, businesses can minimize operational complexity, reduce resource waste, and focus on delivering value. 

    Below are the key ways these ideas lead to cost savings:

    • Transition from siloed IT structures to integrated, flow-oriented platforms
    • Automate routine tasks like policy enforcement, upgrades, and dependency management
    • Use self-service models to reduce reliance on IT support teams
    • Leverage standardized cloud platforms instead of building custom solutions
    • Optimize workforce roles to focus on strategic tasks rather than manual processes
    • Implement shared responsibility models to improve accountability and reduce downtime
    • Avoid technical debt by automating lifecycle management and using scalable solutions
    • Reduce compliance-related costs with automated guardrails and governance tools
    • Accelerate time-to-value by adopting pre-integrated solutions from trusted partners

    By implementing these strategies, organizations can achieve significant cost savings while enhancing flexibility, scalability, and innovation. These measures not only reduce operational expenses but also position your business for long-term success in a competitive environment.

    Of course, building everything yourself is an option, and it was common in the market over the last decades. However, as I mentioned in Part 2, it often makes sense to focus on your company’s intellectual property and leverage existing capabilities. For example, Broadcom is investing $2 billion annually in VMware Cloud Foundation to address these challenges, enabling customers to adopt modern paradigms and build robust cloud platforms. This $2 billion is often significantly more than many of my customers’ entire IT budgets. This is something to consider when planning to build a private cloud platform.

    And lastly, everything I’ve written can be met with “it depends” or “but what about xyz?” I understand the landscape is diverse and often customer-specific. In this context, I still believe moving forward with standard software for commodity services is the best choice for building a private cloud. After all, anyone can adopt a public-cloud-first strategy, but doing the same on-premises often seems like a non-starter.

    I hope you enjoyed my 3-part blog series “The art of mastering platforms”, and I would like to thank you for taking the time to read through it!

    Should any of these topics spark your interest and you wish to discuss them further, please reach out. I’m eager to connect!

  • The art of mastering platforms – Part 2

    In Part 1 of my series, I talked about how cloud platforms connect people and businesses, why understanding them is useful, and why the platform approach is important in today’s economy. In this second post, I will focus on the importance of aligning platforms with your business goals to drive success, and how innovating together with your partners can help you achieve even better results.

    Align Platforms with your Business Goals to Drive Success

    Aligning business goals with IT capabilities is not a new concept. However, in the era of platform building, it has become even more critical to design platforms that align with your business objectives. This alignment ensures that any investment is clearly supporting a joint goal and is the foundation to justify budget allocations. Imagine investing heavily in a cutting-edge platform only to find it underutilized, or worse, being unable to articulate the value it delivers to your business.

    To avoid this, it’s essential to collaborate with consumers, teams in your organization and partners to identify both gaps and capabilities needed to support strategic goals and their associated use cases. For instance, if a company aims to introduce automation into its business processes to handle growth and accelerate customer interactions, the underlying platform must enable automation of common tasks and actions for all involved applications. Consider automated policy enforcement: would sticking to a manual model support these goals effectively?

    As an example, one of my FSI customers is excelling in maintaining compliance while implementing strict processes to ensure regulatory adherence. However, weaknesses emerged during our discussions about bringing new workloads into production. Currently, this process requires filling out web forms and documents; once approved, the workloads can be deployed to production. Nevertheless, this is a one-time audit. Once in production, auditing must be initiated manually, and checks and guardrails are also conducted manually. Transforming this into automated policy enforcement for these workloads would provide significant benefits for the customer, not only throughout the development-to-production chain but also for live auditing and reporting in production.
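
    As a thought experiment (this is not the customer’s actual tooling), the step from a one-time, form-based audit to live auditing can be as simple as a recurring check of every production workload against a centrally defined rule set, with the findings feeding a report:

        from dataclasses import dataclass

        @dataclass
        class Workload:
            name: str
            encrypted: bool
            backup_enabled: bool
            owner_tag: str

        # Hypothetical compliance rules, defined once and applied to every workload.
        RULES = {
            "encryption-at-rest": lambda w: w.encrypted,
            "backup-configured":  lambda w: w.backup_enabled,
            "owner-tag-present":  lambda w: bool(w.owner_tag),
        }

        def audit(workloads):
            """Replace the one-time, manually triggered audit with a repeatable check."""
            return [f"{w.name}: violates '{rule}'"
                    for w in workloads
                    for rule, check in RULES.items()
                    if not check(w)]

        production = [
            Workload("payments-db", encrypted=True, backup_enabled=True, owner_tag="team-fin"),
            Workload("report-batch", encrypted=False, backup_enabled=True, owner_tag=""),
        ]

        # Run this on a schedule (or on every change event) and feed the findings into reporting.
        for finding in audit(production):
            print(finding)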

    Having a clear understanding of the required capabilities and how they align with business objectives is vital, not only for achieving strategic goals but also for justifying future investments and securing budgets.

    When I talk to customers, I often find that they are aware of directives from their managers, and those managers, in turn, follow directives from their own leaders, and so on up the chain. This highlights a common challenge: it can be difficult for employees to truly identify with the company’s goals and understand the purpose behind their daily work and investments of time.

    To address this, we have introduced a method where we collaboratively map out which services are necessary to support the company’s strategic objectives and what a future-state architecture could look like. In addition, we also establish the target operating model by outlining how the necessary services would be provided by the existing model and how this should look with an optimized model. This exercise often serves as an eye-opener for participants, providing them with new ideas and motivation to engage with innovative topics and support these initiatives internally. By connecting their work to clear strategic goals, employees find something meaningful to identify with, something that gives purpose to their efforts.

    This approach represents a different model of motivation. Not only does it increase employee satisfaction and drive innovation, but it also empowers teams with tangible evidence to present to the organization’s leadership. By linking initiatives, such as building a new cloud platform, directly to the company’s strategic goals, employees gain a clear narrative and concrete metrics to support their case. This alignment enables more productive discussions with leadership about how technology investments contribute to broader business objectives and helps secure buy-in for future innovation.

    Innovate together with your partner

    As mentioned earlier, organizations increasingly recognize the transformative potential of cloud platforms. And just to reiterate it here: all cloud platforms. Private and Public. Hyperscale and Edge. Cloud is an operating model, not a place. And one key advantage of cloud platforms is that resources are available on-demand and seamlessly integrated. You don’t need to worry about how to connect them, your provider ensures everything works together. This flexibility and speed significantly accelerate your time-to-value.

    In the past, building IT platforms often involved a best-of-breed approach, requiring experts to manage numerous tools and their integrations. This approach demanded ongoing maintenance, updates, and upgrades, all of which fell on your team. While attracting and retaining the right talent is critical, it also consumes significant time and budget. Ultimately, this effort delays time-to-value for solutions that are already available in the market.

    In the past month, I had many discussions with customers, particularly on this topic. Many of them are already quite advanced in automation. They build their environments using solutions from different vendors and integrate everything themselves. The technology they create is impressive. However, they also realize that they must handle ongoing maintenance, version upgrades, interoperability, and dependencies on external libraries to keep everything working together.

    This requires expertise in areas that may not be core to their company’s intellectual property. Instead of focusing on activities that directly drive business value, such as selling more products, significant time is spent building platforms that are already available off-the-shelf. As a result, the wheel is reinvented dozens of times globally, each company doing it for themselves, while vendors are simultaneously developing standard software to address these needs.

    This trend is evident not only with public cloud providers, who offer standardized solutions, but also with on-premises cloud software.

    You need to identify the best pathway to choosing the most suitable Private Cloud model for your needs, one that supports your business goals and intellectual property, enabling your organization to grow.

    For example, Broadcom invests $2 billion annually¹ in VMware Cloud Foundation to deliver a standardized, integrated cloud platform. If you were to build your own solution using multiple vendors, what level of investment would be required to match that pace? This challenge becomes even more pronounced when incorporating advanced services like Artificial Intelligence or Database-as-a-Service. In a self-built environment, you must constantly consider how to construct, integrate, maintain, and extend your stack. Would you take the same approach in the public cloud?

    The semiconductor industry exemplifies the immense investment and time required to achieve success in highly specialized fields. For instance, producing machines to manufacture wafers for chips demands decades of effort and billions in investment. The same applies to designing or producing chips: building your own chips from scratch would take decades to realize value and generate significant revenue. By the time you reach parity with competitors, they will have already advanced further with new innovations and technologies.

    In the past, as an example, customers bought from VMware and then had to build their cloud (or enterprise IT) themselves. Now, it’s all combined in a streamlined private cloud offering, backed by a $2 billion annual R&D budget, which, by the way, is similar to the overall IT budget of NASDAQ or DAX40 companies.

    One of my customers, who for years insisted they could build an even better cloud solution in-house while automating everything, has now decided to move forward with VMware Cloud Foundation. They are really proud of what they have built in the past, but they have realised that keeping up with the expertise, maintenance, expansions and pace of established providers is a never-ending race. Therefore, they now plan to use as many templates, blueprints, and integrations from the platform as possible.

    In summary, while custom-built solutions can be powerful, leveraging standardised platforms can free up resources to focus on core business objectives, rather than duplicating efforts that vendors have already solved. Instead of starting from scratch, businesses should focus on partnerships and leveraging established platforms to stay competitive in this fast-evolving industry.

    In the next and final post of this series, I’ll talk about what “reduce cognitive load” means, how to promote your platform and explain how all these strategies together can help you optimise your TCO.

    1. Blog Post by Hock Tan on “Accelerating VMware’s growth” https://www.broadcom.com/blog/accelerating-vmwares-growth ↩︎
  • The art of mastering platforms – Part 1

    As we come closer and closer to one of the most impactful product releases of VMware by Broadcom, I’m starting a small series about the necessity of harmonisation between technology and operating model, besides all our nice feature announcements.

    The intent is to give you some food for thought on how both together will help you be successful with your business.

    But first, let’s start with giving some context.

    Many of my customers today are confronted with a perfect storm. Legacy systems demand maintenance at a time when business expectations rise, market trends accelerate, and talent gaps widen – all while security and compliance requirements multiply. On top of that, we see geopolitical uncertainty, where sovereignty becomes a new priority.

    The pace of digital transformation has turned technological adaptation from a “nice to have” competitive advantage into a “must have” survival skill. It’s a paradox: while public cloud adoption promises agility, cost reduction and fast time to value, many end up with multiple clouds, grappling with fragmented infrastructures, spiraling costs, and operational complexity. The real challenge isn’t choosing between on-premises, public or private cloud solutions – it’s the art of mastering platforms.

    Oftentimes, the transformative potential of cloud platforms is already recognized, yet many find themselves in hybrid environments, burdened by legacy systems they can’t abandon and cloud deployments that fail to deliver promised agility. The result? Operational complexity that stifles competitiveness and budgets that balloon without clear return on investment.

    In today’s economy it’s not just about technology choices, it’s about an era where “doing nothing” risks obsolescence, while reactive, FOMO-driven decisions (fear of missing out) create technical debt. A “quick and dirty” solution today is tomorrow’s operational nightmare. The real challenge lies in building adaptive platforms that focus on reducing cognitive load for both consumers & providers, under an umbrella of an adaptive operating model, which is scalable, flexible and able to maintain the infrastructure in the long run with clear responsibilities.

    For sure, not everything is perfect and every organization is somehow unique, but even so, we have seen similarities in the field, which I want to share to give you some food for thought.

    In this and the upcoming blog posts, I will highlight possible scenarios and actions you can take, based upon experiences I have gathered over the last couple of years working with customers on building cloud platforms.

    Synchronised evolution of technology and operating models

    Deploying enterprise platforms successfully, whether hybrid, on-premises, or cloud-native, means more than adopting new tools; it requires an evolution of technology as well as operating models.

    Picture: Cloud will only be successful considering both: Technology & Operating Model

    These environments operate under fundamentally different paradigms. Simply replicating current structures, processes, and behaviors on cloud platforms often fails.

    To quote Melvin Conway from the 60s:

    Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

    For over 30 years, IT has followed a “resource-oriented approach,” optimising individual resources with subject matter expertise. While effective in silos, this approach often neglects how other necessary resources are delivered. Consumers frequently request resources silo by silo, a process that can take months due to differing priorities and capacities, frustrating consumers, businesses, and IT departments.

    When I started working with public cloud roughly 15 years ago, it was completely different from what I had experienced before in classic enterprise IT. Product teams were able to request resources according to their needs, instantly, and with transparent showback and chargeback mechanisms.

    As we know, business services are not just a single virtual machine with storage. More often, they consist of a set of different technologies required to build and deliver the service, such as messaging capabilities, document storage, databases, and more.

    In classic enterprise IT, providing all those resources on-demand wouldn’t even have been possible due to the resource-oriented approach. Everyone focused on their own service and wanted to offer it in the best possible way, but without considering the overall consumption or the bigger picture.

    To address this, a “flow-oriented approach” emerged with cloud platforms. This approach focuses on delivering services as a unified flow rather than requiring resource-by-resource requests. It feels like obtaining all necessary resources from one source.

    Many organisations tried to solve this and established some form of Service Management function; this function was mainly orchestrating individual silos and aggregating information for the requestor. Many learned that trying to achieve a unified flow only through closer collaboration between silos often does not solve the real issue.

    What we see working better is a shift in organizational behavior: restructuring teams and responsibilities to offer resources in a self-service manner to consumers, but also introducing Platform-as-a-Product thinking into the organization. That thinking and responsibility also helps the team build and provide an even better service, release after release, with a focus on their customers and what they really need.

    Additionally, cloud platforms thrive on standardised, self-service consumption. Providers must rethink governance, lifecycle management, and cost accountability. Automated policy enforcement (“define once, apply universally”) and compliance guardrails integrated into resource provisioning are essential to quickly adapt to new compliance and regulatory policies.
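
    As a rough sketch of what “define once, apply universally” can mean at provisioning time: guardrails live in one list that every request passes through, so a new regulatory requirement is added in exactly one place. The checks below are invented purely for illustration:

        # Guardrails are defined once and applied to every provisioning request.
        GUARDRAILS = [
            ("region must be in the EU",  lambda req: req.get("region", "").startswith("eu-")),
            ("cost center tag required",  lambda req: bool(req.get("cost_center"))),
            ("no public ingress allowed", lambda req: not req.get("public_ingress", False)),
        ]

        def enforce(request: dict) -> list[str]:
            """Return the violated guardrails; an empty list means the request may proceed."""
            return [name for name, check in GUARDRAILS if not check(request)]

        request = {"service": "database-as-a-service", "region": "eu-central",
                   "cost_center": "4711", "public_ingress": False}

        violations = enforce(request)
        print("approved" if not violations else f"rejected: {violations}")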

    I mentioned the time when I started working with product teams using public cloud. We simply opened the door and flooded the teams with all these possibilities. Can you imagine what happened? There was a complete mix-up in the consumption of services, a nightmare in terms of compliance and auditing those environments. That was a period when we had to rebuild our cloud platform multiple times, learning from each new issue that arose.

    However, it was also a time when we learned to adopt new practices for our operating models, such as offering the platform in a policy-driven way and building virtual private clouds for each product team. This helped us to structure, automate, and standardize our cloud platform, enabling us to offer it in a scalable way while remaining flexible to new regulatory requirements.

    Although these experiences happened a long time ago, they are still relevant today. In discussions with many organisations, I often find that many of them have not yet explored this way of thinking. Adopting these practices helps organisations rethink how work is done, not just where it runs. This enables teams to transition from reactive “operator” roles to proactive “orchestrator” roles, allowing infrastructure to scale intentionally rather than sprawling accidentally.

    Of course, implementing such significant changes is not something that happens within a few months or simply through discussions with peers from other departments. Often, it involves confronting cultural resistance and integrating new approaches with existing systems.

    Many of the conversations I have with teams revolve around a culture that has been built over decades. These organisations have experts in many fields like networking, storage, compute, security, application development, and more. Just because you have a solution that you find innovative or “cool” doesn’t mean they automatically agree with you. Convincing them can be challenging, but everything starts with including them in the discussions. On the other hand, cloud platforms need exactly this consolidation and offer the opportunity to start those conversations.

    This is the starting point of change! It’s not innovation you have to start with – it’s called exnovation! It defines the call to action: stop doing what you have always done!

    We often find that involving different departments right from the beginning is key to success. It is important to clearly articulate the benefits and goals of the new platform or technology, and asking those teams for their advice is a strong starting point. Moreover, no solution on the market is absolutely perfect. Through detailed discussions, we frequently identified problems that other departments face with their current solutions – problems that the new technology might overcome. However, these insights only emerge when you engage in open, equal discussions with your peers.

    So, what is the conclusion? It’s not a matter of technology, nor of using or just calling it cloud. Today’s expectations of consumers and providers differ, depending on what was built over the last decades. Simply designing a system that mirrors your current organisational structure will not be as successful as a new approach could be. Think about new methods of providing services and how to manage them with the patterns we have learned from two decades of public cloud. Include your peers as soon as possible and use their expertise.

    But, and that’s important, don’t over-engineer everything at the beginning. Start with small steps: begin with a small service to establish the first new operating model along the full end-to-end process, and grow from that baseline to start your cloud journey.

    In the next post I’ll write about why it’s quite helpful to map your business and IT goals to your platform beforehand, how this supports you in your journey, and why innovating together with partners is key.