Stuck in the past: How SAP’s data policies undermine modern cloud strategies 

In today’s fast-paced digital landscape, organisations are increasingly relying on modern cloud-native platforms like Google BigQuery, Microsoft Fabric, Snowflake, and Databricks to power their data-driven decision-making. These platforms offer scalability, flexibility, and advanced analytics capabilities that are crucial for staying competitive. Yet SAP seems stuck in the past, stubbornly refusing to acknowledge this multi-platform reality. I bet even SAP does not run all its analytics from a single (SAP) platform (and I would love to see evidence to the contrary 😉). By denying customers seamless ways to export data to these platforms, SAP’s approach is creating significant challenges for its users and driving them towards third-party solutions. 

SAP’s outdated approach to data integration 

SAP’s integration tools are ill-suited for the needs of the modern data stack. Let’s have a look at the options:  

– SAP Landscape Transformation Replication Server (SLT) 
As a trigger-based replication server, it is theoretically a suitable candidate for integration. However, it is mainly geared towards SAP-to-SAP replication and, outside of SAP, it can only land data in legacy databases like DB2, SQL Server, or Oracle. None of these can be classified as ‘modern data platforms’. 
– SAP Data Services (BODS, BODI) 
SAP Data Services is built on a 30(!)-year-old client/server architecture, unsuitable for cloud environments, and lacks meaningful data streaming capabilities. 
– SAP DataSphere Premium Outbound 
SAP DataSphere Premium Outbound, while technically capable of provisioning data, is far too complex and expensive to be practical if used solely for data export into a third-party cloud environment. And despite the ‘Premium’ label, the list of supported target destinations is still rather limited.

Limitations like these make SAP’s own solutions impractical for organisations that need quick and reliable ways to integrate their SAP data with cutting-edge cloud ecosystems. 

Customers are turning to third-party alternatives 

As a consultant, I find it frustrating to see customers wasting time trying to make data extraction from SAP work by following SAP’s biased guidance. In recent months, I have seen one global enterprise struggle for months with SLT/Data Services and get nowhere. Another global enterprise decided to abandon its SAP DataSphere project after nearly two years and underwhelming results. SAP claims to put the customer first, but that only seems to be true if the customer is willing to use SAP’s products exclusively. 

Faced with SAP’s shortcomings, many organisations are looking elsewhere. Third-party tools like Fivetran, SNP Glue, Theobald and Qlik Replicate are becoming the go-to options for extracting SAP data and moving it to modern cloud platforms. These solutions are not only more user-friendly and efficient but also offer robust support for a wide range of cloud-native systems. 

What these tools have in common is their ability to do what SAP’s own offerings cannot: make it simple and seamless to integrate SAP data with platforms like BigQuery, Snowflake, or Databricks. 
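
To make “simple and seamless” a little more concrete, here is a deliberately minimal Python sketch of the kind of plumbing these tools abstract away: reading a slice of an SAP table over RFC and landing it in BigQuery. It assumes the pyrfc and google-cloud-bigquery libraries and placeholder connection details; real replication (deltas, change data capture, wide tables, monitoring) is exactly what the commercial tools above handle for you.

```python
# Minimal sketch: copy a slice of an SAP table into BigQuery.
# Assumes pyrfc + google-cloud-bigquery are installed and the
# connection details below are replaced with real values.
import pandas as pd
from pyrfc import Connection          # wrapper around the SAP NetWeaver RFC SDK
from google.cloud import bigquery

# Placeholder SAP connection parameters (replace with your system's)
sap = Connection(ashost="sap-host", sysnr="00", client="100",
                 user="RFC_USER", passwd="********")

# RFC_READ_TABLE returns delimited rows for the requested fields
result = sap.call("RFC_READ_TABLE",
                  QUERY_TABLE="MARA",                                   # material master
                  DELIMITER="|",
                  FIELDS=[{"FIELDNAME": f} for f in ("MATNR", "MTART", "MATKL")],
                  ROWCOUNT=10000)

rows = [r["WA"].split("|") for r in result["DATA"]]
df = pd.DataFrame(rows, columns=["MATNR", "MTART", "MATKL"]).apply(lambda s: s.str.strip())

# Load into a BigQuery table (project/dataset/table names are placeholders)
bq = bigquery.Client()
bq.load_table_from_dataframe(df, "my_project.sap_raw.mara").result()
```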

Alongside this article is a table of some great third-party tools which all blow SAP out of the water when it comes to replicating data to a cloud data platform (there is little published evidence, but I am happy to stand by this claim as I have seen it many times). If you are looking for guidance on which tool is best for your organisation, please get in touch: Snap Analytics’ methodology for this is efficient and proven, and it looks at your complete analytics landscape (source and target destinations), the impact on your SAP ERP workload, your in-house skills, costs, and other factors. 

The impact on customers 

SAP’s refusal to adapt to the needs of the modern data landscape is hurting its customers. Organisations that rely on SAP for their core business processes are being forced to choose between two suboptimal options: 

  1. Stick with SAP’s in-house tools: This often results in higher costs, increased complexity, and slower time-to-value. 
  2. Invest in third-party solutions: While these tools are effective, they add additional vendors, integration layers, and costs to an already complex IT landscape. 

In either case, customers bear the burden of SAP’s unwillingness to acknowledge the reality of multi-cloud and hybrid environments. The inefficiencies created by SAP’s closed-off approach can delay data-driven decision-making, inflate expenses, and limit the ability to leverage advanced analytics capabilities. 

Why SAP needs to change 

The reality is clear: the enterprise data landscape is no longer confined to monolithic systems or single-vendor ecosystems. Organisations are building diverse, cloud-centric data architectures. SAP should recognise this, and put the customer first again. This means embracing openness and interoperability, rather than continuing to push inferior, SAP-centric solutions. 

By providing robust, user-friendly mechanisms for exporting data to non-SAP platforms, SAP could reclaim the trust of its customers and cement its place as a leader in enterprise data management. Customers are happy to pay for a premium service to get data out of SAP ERP and into the cloud – as long as that service is truly premium in the technical sense, not just in name and price.  
Until then, customers will seek a better solution from a 3rd party provider.  

Why Boutique Consultancies Outshine Global SIs: An Insider’s Perspective

Let me start with a confession: I’ve spent the past 20 years navigating the consulting world, working with and against the titans of the industry – the global system integrators (SIs). And here’s the punchline: time and time again, boutique consultancies have stolen the show.

Now, before I get accused of bias (or sour grapes), let’s unpack why this is not just my opinion but a reality many clients experience firsthand.

1. Clients Are King, Queen, and Court Jester

For a boutique consultancy, every client matters. It’s simple maths: when you’re a smaller player, losing a client isn’t just a line item in a quarterly report; it’s personal. The stakes are higher, and so is the effort.

Compare that to the global SIs. For them, your business might be one of hundreds or thousands. Sure, they’ll wine and dine you during the sales process, but once the ink dries on the contract, you might find yourself relegated to a team halfway around the world with an account manager who remembers your name only because it’s in the email signature. At a boutique consultancy, your success is their survival—a much stronger incentive to deliver.

2. Leadership Isn’t Just a PowerPoint Slide

Boutique consultancies have another secret weapon: engaged leadership. When the CEO or partners are involved in hiring, quality isn’t just a buzzword; it’s a hands-on reality. They’re not just looking for someone who checks boxes; they’re looking for people who will embody their values and ethos.

Global SIs? Let’s be honest: their leadership might be more focused on shareholder reports and the next big acquisition. Hiring is often a conveyor belt operation managed by HR, with less focus on finding the perfect fit for each role. And while there are talented people at every level in an SI, it’s hard to match the consistency you’ll find at a boutique, where the leadership knows every team member’s strengths and weaknesses.

3. The Boutique Outperformance Effect

Over my two decades in this game, I’ve seen it repeatedly: boutiques outperform. Why? Because they have to. A boutique’s reputation is everything. One bad project can do serious damage, so they’re highly motivated to get it right the first time.

Contrast this with global SIs, who often rely on their brand to smooth over bumps. (“Yes, we’re three months late and 20% over budget, but look at our logo!”) In boutiques, the teams are smaller, nimbler, and more cohesive. They’re less likely to get bogged down in bureaucracy and more likely to focus on solving problems.

4. Real People, Real Conversations

With boutiques, you’re dealing with people who genuinely care about your business. There’s no labyrinth of escalation paths or “Let me loop in my colleague from XYZ department.” You speak directly to decision-makers. Your feedback is acted upon, not filed away in a system with a ticket number.

Better yet, the people pitching your project are often the same people delivering it. No “pitch the A Team” and deliver the “B Team.” With boutiques, what you see is what you get: experienced professionals who are fully invested in the success of your project.

5. Humor and Humility

Finally, boutiques tend to be more relatable. They’re less about the pomp and more about rolling up their sleeves. They’re the consultants who will laugh with you, stress with you, and celebrate with you when the project succeeds. In a boutique, you’re not just another client; you’re a partner.

The Bottom Line

The global SIs have their place. They’re good for massive, multi-year, multinational endeavors that require armies of consultants. But if you’re looking for agility, attention, and authenticity, boutique consultancies are where the magic happens.

The business means more to them, their leadership is closer to the ground, and they deliver with a passion that’s hard to replicate in a behemoth.

So the next time you’re deciding between David and Goliath, remember: sometimes, the slingshot is mightier than the sword.

7 data-related considerations for your first quantum steps 

As quantum computing continues to evolve, business leaders are starting to explore how this groundbreaking technology might impact their industries. 

This was very much in evidence at the National Quantum Technologies Showcase in London, where I saw an impressive number of people working on real-world applications of quantum technology. It was amazing to see manufactured components that could trap and manipulate matter at the quantum scale, particularly when you consider that there are more molecules in a litre of water than there are stars in the known universe…   

If you’re a CFO, CIO, CDO, or CEO wondering how to begin understanding quantum computing, this framework offers 7 critical points to guide your thinking as we prepare to enter 2025 – the year of quantum! 

Quantum computing is on the cusp of transitioning into mainstream use, and while it may not change everything overnight, the specific areas it will transform will do so dramatically. 

From solving complex optimisation problems in seconds to redefining data security protocols, quantum computing will challenge the way businesses approach many of their most pressing issues. 

Read on to explore 7 key considerations that should be at the forefront of your thinking as you start your quantum journey. 

1. Let’s get it out of the way: The impact on encryption

Encryption is often the first thing that comes to mind when discussing the potential impact of quantum computing. With current cryptographic methods (such as RSA encryption) relying on the difficulty of factoring large numbers, quantum computers pose a serious threat by significantly reducing the time needed to crack these codes. 
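
As a toy illustration of that dependency (not a security recommendation), the sketch below builds a “textbook” RSA key pair in Python: recovering the private key is exactly the problem of factoring n, which is trivial at this size, infeasible classically at 2048 bits, and precisely what Shor’s algorithm would make tractable on a sufficiently large, error-corrected quantum computer.

```python
# Toy "textbook" RSA, purely to show why factoring matters (illustrative only).
from sympy import randprime, mod_inverse

p = randprime(2**15, 2**16)
q = randprime(2**15, 2**16)
while q == p:                            # make sure the two primes differ
    q = randprime(2**15, 2**16)

n, e = p * q, 65537                      # public key
d = mod_inverse(e, (p - 1) * (q - 1))    # private key requires the factors p and q

msg = 42
cipher = pow(msg, e, n)                  # encrypt with the public key
assert pow(cipher, d, n) == msg          # decrypt with the private key

# An attacker who can factor n recovers d. Trial division works on this tiny n;
# for a 2048-bit n it is hopeless classically, but not for Shor's algorithm.
f = next(i for i in range(3, int(n**0.5) + 1, 2) if n % i == 0)
d_cracked = mod_inverse(e, (f - 1) * (n // f - 1))
assert pow(cipher, d_cracked, n) == msg
```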

For digital and data leaders, this threat creates a critical imperative: by 2030, quantum computers with sufficient numbers of qubits and error correction may be available to perform this new class of code-breaking, meaning quantum-safe encryption must be part of your cybersecurity strategy. Failing to plan for it could expose your company to significant security risks, including data breaches and cyberattacks that could undermine customer trust. 

By securing your data now, you not only protect your company from future vulnerabilities but also stay ahead of competitors who may delay taking action. 

2. Education and awareness 

Whilst you can already buy or access quantum hardware from companies like IBM, D-Wave, Google and Microsoft, current machines are quite “small” (not many qubits) and, more importantly, they are still much more prone to errors than classical computers. This is why error correction is such an important field of research in quantum computing, and why Google’s Willow chip is an important breakthrough. 

So rather than diving into a pilot that is likely to produce error-strewn results even if the functionality behind it is rock solid, one of the most valuable steps to take right now is education and awareness. 

The quantum realm is unlike the world we experience every day, so demystifying some of the important concepts like superposition and entanglement, which underpin the quantum computing advantage, will help others in your organisation make an informed decision when the time is right to take the plunge and invest. 
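
For a hands-on flavour of those two concepts, here is a minimal sketch using Qiskit (assuming it is installed): a Hadamard gate puts one qubit into superposition and a CNOT entangles it with a second, producing a Bell state whose measurement outcomes are perfectly correlated.

```python
# Minimal Qiskit sketch: superposition and entanglement in a two-qubit Bell state.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

bell = QuantumCircuit(2)
bell.h(0)        # superposition: qubit 0 becomes (|0> + |1>) / sqrt(2)
bell.cx(0, 1)    # entanglement: qubit 1 now mirrors qubit 0

state = Statevector.from_instruction(bell)
print(state.probabilities_dict())   # {'00': 0.5, '11': 0.5} - never '01' or '10'
```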

3. Where will quantum computing make a difference? Identifying use cases

Quantum computing won’t replace traditional computers – it will complement them by solving problems that are currently intractable (extremely difficult or resource intensive to solve). For business leaders, identifying the right use cases for quantum computing is essential. Some of the most promising areas include: 

  • Optimisation Problems: Logistics, supply chain management, and financial optimisation are examples where quantum computers promise to solve highly complex problems far faster than classical systems, for example route optimisation (the travelling salesperson problem) and resource allocation and scheduling in manufacturing processes. 
  • Drug Discovery and Material Science: Quantum computing has the potential to revolutionize industries such as pharmaceuticals by simulating molecular structures and reactions at a level of detail far beyond what’s possible with classical computers. 
  • Artificial Intelligence and Machine Learning: Quantum computers could enhance AI by handling larger datasets and more complex calculations, such as quantum versions of the k-nearest neighbour algorithm, potentially improving pattern recognition, predictive analytics, and other AI-driven processes. 

By identifying areas where quantum computing can deliver transformative value, you can begin planning strategically. 
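
To see why problems like the travelling salesperson example above are called intractable, the toy sketch below brute-forces the shortest route in plain Python. It copes with eight cities, but the number of routes grows factorially, which is exactly the kind of search space quantum optimisation approaches are aimed at.

```python
# Brute-force travelling salesperson: fine for 8 cities, hopeless at 30.
import itertools, math, random

random.seed(1)
n = 8
cities = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % n]]) for i in range(n))

best = min(itertools.permutations(range(n)), key=tour_length)
print(f"best tour: {best}, length {tour_length(best):.3f}")
print(f"routes checked for {n} cities: {math.factorial(n):,}")   # 40,320
print(f"routes for 30 cities: {math.factorial(30):.2e}")         # ~2.65e32 - intractable
```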

A sound, cloud-based data platform will be critical for testing and implementing these use cases, as it provides the scalability, security, and computational power needed to harness quantum computing’s potential. 

4. What are the benefits? 

The potential benefits of quantum computing can significantly impact your company’s competitive edge, particularly in sectors requiring massive computational power. Some key benefits include: 

  • Increased Efficiency: For companies in consumer goods, manufacturing, finance, logistics, or energy, quantum computing promises breakthroughs in optimisation, which could dramatically lower operational costs and improve efficiency. 
  • Accelerated Innovation: The ability to simulate and test complex systems with high precision can shorten the time to market for new products and services, especially in sectors like pharmaceuticals and material sciences. 
  • Improved Decision-Making: Quantum algorithms can process vast datasets much more efficiently than classical algorithms, enabling faster and more accurate decision-making in real-time. 
  • Sustainability: For companies and applications requiring huge amounts of energy to run classical compute processes, for example in R&D and AI training, quantum computers could save significant amounts of energy. However, there is a caveat – this has yet to be definitively proven, and of course energy savings could be offset by an “arms race” of doing more and more quantum calculations. 

In order to achieve these benefits, a robust, cloud-based data platform is needed to facilitate the integration of quantum-powered insights into your business operations. 

5. What are the risks? 

As with any emerging technology, there are risks associated with quantum computing. These include: 

  • Security Risks: If quantum-safe encryption is not implemented in time, quantum computing could expose sensitive data to breaches. 
  • Technology Readiness: Quantum computing is still in its early stages, is prone to errors, and the technology may not be ready for widespread, enterprise-scale implementation for several years. There may be a gap between early experimentation and practical, commercial use cases. You could waste money! 
  • Talent Shortage: As demand for quantum expertise increases, the shortage of skilled quantum professionals could become a bottleneck, driving up costs and slowing progress. 

Being proactive in your planning can mitigate these risks. Enabling some of your people to educate and train themselves in real or simulated quantum environments will prepare your business for exploiting quantum opportunities as they arise. 

6. What is the effect on existing infrastructure? 

The integration of quantum computing with existing IT infrastructure will require careful planning. Quantum computers won’t replace traditional computing systems – they will augment them. This means that organisations will need to build hybrid architectures that can seamlessly integrate quantum computing with classical computing resources, although for a lot of companies quantum computing will simply be part of the cloud compute service they use and may not require physical infrastructure changes. 

A sound, cloud-based data platform offers the flexibility to experiment with quantum resources while maintaining the performance and reliability of existing IT systems. Additionally, these platforms provide scalable storage and processing power, which may be needed for handling the data flows generated by quantum experiments, especially those involving comparisons with classical compute. 

7. What resources and skills are needed, and how can they be accessed? 

Building a quantum-ready organisation requires the right combination of resources, infrastructure, and talent. Key areas of focus include: 

  • Talent Acquisition: The demand for quantum scientists, engineers, and software developers will increase. Ensuring your team has access to the necessary talent through training or partnerships will be critical. 
  • Cloud-Based Quantum Tools: Many big tech companies, such as IBM, AWS, Google, and Microsoft, are already offering quantum computing services. These tools will be invaluable for businesses looking to experiment with quantum computing without investing in dedicated hardware. 
  • Collaboration with Research Institutions: Partnering with universities or dedicated institutions like the National Quantum Computing Centre or Digital Catapult in the UK can provide the expertise needed to explore quantum use cases and stay on the cutting edge. 

Building a strong ecosystem of resources and skills will be essential for navigating the quantum landscape effectively – the only question is timing. For most companies who haven’t yet started their quantum strategy, building awareness and education is a great first step. 

Conclusion 

Quantum computing is still in the early stages of its journey to mainstream adoption, but the clock is ticking for businesses to begin preparing. 

By 2030, quantum computing will likely have a profound impact on a huge range of processes, from cybersecurity to logistics operations and more.  

Education and awareness is the best first step for those who haven’t yet started their quantum strategy. 

Investing in a robust, cloud-based data platform is essential for preparing your organisation to integrate quantum computing. With the right infrastructure, you can accelerate your journey into this transformative technology, opening the door to new efficiencies, innovations, and competitive advantages. 

If you’d like to explore this in more detail, get in touch to arrange a discussion. 

4 Key EU AI Act Insights for Data Engineers 

Approved in May 2024, the EU AI Act addresses growing concerns over the ethical, legal, and social impacts of AI technologies. The framework aims to provide legal clarity and consistency for businesses in the AI space. It applies to all member states of the European Union, and its scope extends beyond the EU in certain cases: 

  • It applies to providers and users of AI systems located within the EU, regardless of where the AI system is developed. 
  • It also applies to providers outside the EU if their AI systems are used in the EU market or affect EU citizens. 

For data engineers, this means helping businesses operate confidently in the AI space by following clearer guidelines for development. It’s all about building investor trust and sparking innovation while ensuring that high-quality data and solid data governance practices are at the forefront. By sticking to these regulations, you can help tackle issues like algorithmic bias and privacy risks, paving the way for responsible AI development that benefits society as a whole. 

Key points of the AI Act 

  1. Risk-Based Approach 
  2. Guardrails for AI Systems 
  3. Transparency and Accountability 
  4. Conformity Assessment 

Risk-Based Approach 

As the landscape of AI technology evolves, so do the regulations that govern it. Under the new EU AI Act, companies are now required to notify users when they interact with specific AI systems—think chatbots, emotion recognition, and biometric categorization. For essential service providers in sectors like insurance and banking, this means conducting thorough impact assessments to understand how AI affects the fundamental rights of the public. 

Consequently, data engineers’ roles will be crucial in building infrastructure that is compliant with the updated regulations. There is a renewed emphasis on keeping meticulous documentation of data sources, preprocessing steps, and model architectures. Data engineers will also be required to develop mechanisms that track user interactions with these AI features and integrate notification prompts that keep users informed. 
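
As a purely hypothetical sketch of that kind of plumbing (the function and field names below are ours, not terminology from the Act), a chatbot call can be wrapped so that every interaction both shows the user a disclosure and writes an audit record:

```python
# Hypothetical sketch: disclose an AI interaction to the user and log it for audit.
import json, logging, uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_interaction_audit")

AI_DISCLOSURE = "You are interacting with an automated AI assistant."

def answer_with_disclosure(user_id: str, question: str, model_call) -> str:
    """Call the model, prepend the disclosure, and write an audit record."""
    answer = model_call(question)
    audit_log.info(json.dumps({
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "system": "support-chatbot",        # placeholder system identifier
        "disclosure_shown": True,
        "question_chars": len(question),    # log sizes, not raw content
    }))
    return f"{AI_DISCLOSURE}\n\n{answer}"

# Example usage with a stand-in model
print(answer_with_disclosure("u-123", "Where is my order?", lambda q: "It ships tomorrow."))
```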

Guardrails for AI Systems 

Lawmakers have determined that tech companies creating these cutting-edge AI systems now have some extra hoops to jump through. Going forward, they will be required to disclose the security and energy efficiency of their creations. To determine which companies are subject to stricter regulations, AI systems will be categorised based on the computing power necessary to train them. Businesses will be required to present comprehensive documentation to allow for transparent evaluation of their AI systems’ risks, performance, and alignment with regulatory requirements. This detailed technical documentation would include: 

  1. Training and Dataset Information: Descriptions of the datasets used for training, including their size, sources, and representativeness. 
  2. Computing Power and Resources: The technical specifications of the resources utilized, which may include FLOPs (floating-point operations) or computational energy consumption if relevant for assessing systemic risks. 
  3. Capabilities and Limitations: Detailed explanations of the model’s design, intended use, performance, and known risks. 
  4. Conformity Documentation: A declaration of compliance with EU standards and a CE marking for high-risk AI systems. 
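
One pragmatic way for data engineering teams to keep this documentation close to the code is to capture it as structured metadata. The sketch below is a hypothetical record mirroring the four points above; the field names are illustrative, not terms mandated by the Act.

```python
# Hypothetical structured record for the documentation points listed above.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDocumentation:
    model_name: str
    training_datasets: list[str]        # sources, sizes, representativeness notes
    training_flops: float               # compute used, where relevant to systemic risk
    intended_use: str
    known_limitations: list[str]
    ce_marking: bool = False            # high-risk systems only
    declaration_of_conformity: str = ""

doc = ModelDocumentation(
    model_name="demand-forecaster-v3",
    training_datasets=["sales_history_2019_2024 (2.1M rows, EU retail only)"],
    training_flops=3.4e18,
    intended_use="Weekly demand forecasting for replenishment planning",
    known_limitations=["Untested on promotional spikes", "EU data only"],
)
print(json.dumps(asdict(doc), indent=2))
```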

Initially, concerns arose regarding whether these regulations would stifle innovation. However, lawmakers have granted a phased implementation period: most rules apply after two years, the ban on prohibited AI uses takes effect after six months, companies developing foundation models must comply within one year, and obligations for high-risk systems take effect only after 36 months. These measures aim to safeguard against potential issues stemming from unregulated development. 

However, responsible AI practices may introduce performance and scalability considerations.  Data scientists and engineers will need to allocate additional resources towards developing solutions that align with compliance standards.  Techniques such as differential privacy or federated learning, which are used to protect sensitive data and promote collaboration while preserving privacy, can impact the performance and scalability of AI systems. Data engineers are now tasked with optimising system performance while maintaining responsible AI practices. 
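
As a concrete (if simplified) illustration of one of these techniques, the sketch below applies the classic Laplace mechanism from differential privacy: noise calibrated to the query’s sensitivity and a privacy budget epsilon is added to an aggregate before it is released, trading a little accuracy for a quantifiable privacy guarantee. It is a textbook illustration, not a production DP library.

```python
# Minimal Laplace-mechanism sketch: releasing a differentially private count.
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=10_000)      # stand-in for a sensitive dataset

def dp_count(condition: np.ndarray, epsilon: float) -> float:
    """Noisy count: a counting query has sensitivity 1, so scale = 1/epsilon."""
    return int(condition.sum()) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

over_65 = ages > 65
print("true count:         ", int(over_65.sum()))
print("epsilon=1.0 release:", round(dp_count(over_65, 1.0), 1))   # small noise
print("epsilon=0.1 release:", round(dp_count(over_65, 0.1), 1))   # more privacy, more noise
```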

Transparency and Accountability 

A new European AI Office, supported by a scientific panel of independent experts, will be established to oversee compliance, implementation, and enforcement. The AI Act states that non-compliance may result in fines ranging from about 1.5% to 7% of global sales turnover, depending on the severity of the offense. Explainable AI therefore takes on particular importance: lawmakers view it as crucial not just for safeguarding the interests of stakeholders and end-users, but also for protecting the developers themselves. 

With the spotlight on explainability, we might find ourselves diving deeper into interpretability methods for machine learning models. As a result, data engineers may need to spend more time and effort understanding and incorporating these methods, which can increase overall development time and complexity. Balancing model interpretability with predictive accuracy will be their new challenge, as trade-offs between interpretability techniques and high levels of accuracy are often unavoidable. 

As explainability becomes a key focus in AI development, companies face the challenge of incorporating interpretability methods into machine learning models without compromising performance. Data engineers will need to balance the time and resources spent on making models interpretable against the need to maintain high predictive accuracy. Businesses otherwise risk non-compliance with regulatory standards like the EU AI Act, loss of stakeholder trust, and ethical concerns around model biases. To mitigate these risks, businesses should adopt transparent AI practices, using tools like SHAP or LIME, conducting regular bias audits, and considering hybrid approaches that balance explainability with model performance. Prioritizing explainability not only ensures compliance but builds trust and mitigates ethical risks, making it a strategic necessity for organizations navigating the evolving AI landscape. 
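
For the tooling mentioned above, a common pattern (sketched here under the assumption that the shap and scikit-learn packages are available) is to attach SHAP values to a tree-based model so that every prediction can be decomposed into per-feature contributions for reviewers and auditors.

```python
# Sketch: per-prediction feature attributions with SHAP on a tree-based model.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])   # contribution of each feature to each prediction

# shap.summary_plot(shap_values, X.iloc[:100])      # optional global view of feature importance
print(np.shape(shap_values))   # per-sample, per-feature attributions (with a class dimension for classifiers)
```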

Conformity Assessment  

Given their potential for significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, the EU has tightened the reins on high-risk AI systems prior to market release or deployment. These systems must assess and mitigate risks, maintain usage logs, ensure transparency and accuracy, and incorporate human oversight. To add to the accountability, the public can now voice their concerns about these systems and demand explanations for decisions that affect their rights. Some AI applications, such as specific biometric categorization systems and practices like untargeted scraping of facial images or social scoring, have already been prohibited in the EU.

Data engineers will need to implement thorough risk assessments, maintain documentation of model design and training data, and use tools for continuous monitoring and compliance checks to ensure ongoing conformity. For businesses to stay ahead of the game, it is advised to invest in ongoing training and education to navigate these shifting regulations. Close collaboration with data scientists, policymakers, and ethicists is also recommended to mitigate errors. Only by embracing these challenges and leading the charge toward a responsible AI future can we all make AI safer for everyone.  


Useful Links & Sources: 

European Parliament. (2023, December 19). EU AI Act: First regulation on artificial intelligence. europa.eu. 

European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law. europa.eu. 

EU Artificial Intelligence Act. (n.d.). Article 13: Transparency and Provision of Information to Deployers. https://artificialintelligenceact.eu/article/13/ 

EU Artificial Intelligence Act. (n.d.). Article 53: Obligations for Providers of General-Purpose AI Models. https://artificialintelligenceact.eu/article/53/ 

European Council. (2024, May 21). Artificial intelligence (AI) act: Council gives final green light to the first worldwide rules on AI. https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/