
What makes Confluent’s latest milestones exceptional? CEO Jay Kreps answers

In an exclusive media roundtable with Jay Kreps, CEO of Confluent, and Richard Timperlake, Senior Vice President, EMEA, Confluent, Kreps spoke about the new launches and what they mean as next steps for the organisation.

Jay Kreps, CEO, Confluent

At the Kafka Summit 2024 in London, the US-based data streaming company announced the launch of new Confluent Cloud capabilities. These make it easier for customers to stream, connect, govern, and process data for seamless experiences and timely insights, while keeping data costs down.

Confluent Tableflow transforms Apache Kafka topics and their associated schemas into Apache Iceberg tables with a single click, to better supply data lakes and data warehouses. The fully managed connectors have been enhanced with new secure networking paths and a 50 per cent lower throughput cost, enabling more complete, safe, and cost-effective integrations.
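Conceptually, the topic-to-table step pairs a topic's raw messages with its registered schema to produce typed table rows. The sketch below is a simplified plain-Python illustration of that idea, not Tableflow's actual implementation; the schema format and field names are invented for the example.

```python
import json

# Hypothetical schema for a Kafka topic, mapping field names to Python types.
# A real deployment would use an Avro/JSON schema from Schema Registry.
ORDER_SCHEMA = {"order_id": int, "amount": float, "currency": str}

def topic_to_rows(raw_messages, schema):
    """Decode raw Kafka-style messages and coerce them to typed table rows."""
    rows = []
    for payload in raw_messages:
        record = json.loads(payload)
        # Keep only the schema's fields, coercing each value to its declared type.
        rows.append({field: cast(record[field]) for field, cast in schema.items()})
    return rows

messages = [
    b'{"order_id": "1", "amount": "9.99", "currency": "EUR"}',
    b'{"order_id": "2", "amount": "4.50", "currency": "GBP"}',
]
print(topic_to_rows(messages, ORDER_SCHEMA))
```

The point of the sketch is the schema's role: it is what turns an unstructured stream of bytes into rows a data lake or warehouse can query.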

The company also announced that Stream Governance is now enabled by default across all regions with an improved SLA available for Schema Registry, making it easier to safely adjust and share data streams wherever they’re being used.
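Schema Registry's value here lies in compatibility checks that let producers evolve a schema without breaking consumers. The following is a simplified, illustrative backward-compatibility rule written in plain Python, not the Registry's actual rule set; the schema representation is invented for the example.

```python
def is_backward_compatible(old_schema, new_schema):
    """Toy backward-compatibility check: consumers on the new schema must
    still be able to read records written with the old one."""
    # Every previously declared field must survive with the same type.
    for name, spec in old_schema.items():
        if name not in new_schema or new_schema[name]["type"] != spec["type"]:
            return False
    # Newly added fields need a default, so old records remain readable.
    for name, spec in new_schema.items():
        if name not in old_schema and "default" not in spec:
            return False
    return True

v1 = {"user_id": {"type": "long"}, "email": {"type": "string"}}
v2 = {"user_id": {"type": "long"}, "email": {"type": "string"},
      "country": {"type": "string", "default": "unknown"}}
print(is_backward_compatible(v1, v2))  # → True
```

Adding `country` with a default passes; dropping `email` or changing its type would not.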


What is the business impact for Confluent of adding Flink and Iceberg? Can you elaborate on the significance of these additions to your platform and how they expand your market?

Jay: The integration of Flink and Iceberg into our platform signifies a pivotal evolution for Confluent. We view it not just as an expansion but as a strategic shift towards becoming a multi-product company. This journey began before our efforts around Flink and Iceberg, with the introduction of fully managed connectors allowing customers to seamlessly plug into various systems for data streams.

Now, with Flink and Iceberg, we’re taking it a step further. These additions represent a broader suite of capabilities catering to the diverse needs of our customers in the realm of data streaming. Flink provides a robust stream processing engine, enabling customers to build powerful applications that process data in real-time.

It complements Kafka, which serves as the core stream of data. Iceberg, on the other hand, facilitates the seamless integration of streaming data with structured data stored in cloud storage, providing a comprehensive solution for managing both streaming and batch data. By incorporating these technologies, we’re not just expanding our capabilities but also addressing the growing complexity of modern data architectures.

These additions enable us to cater to a broader range of use cases, attracting new customers and empowering existing ones with enhanced capabilities.

Jay Kreps on stage, presenting the new launches by Confluent at the Kafka Summit in London.

Whether it’s financial services leveraging real-time data for trading algorithms or enterprises harnessing streaming analytics for IoT applications, our platform now offers the versatility to meet diverse business needs. Ultimately, it’s about providing value to our customers and enabling them to unlock the full potential of their data assets.

Moreover, by offering a comprehensive suite of data streaming solutions, we’re not just tapping into existing markets but also paving the way for new opportunities.

The demand for real-time data processing and analytics continues to soar across industries, and our expanded platform positions us at the forefront of this growing market. It is a significant stride towards fulfilling our vision of empowering organisations with cutting-edge data streaming capabilities.

What long-term impact do you foresee from these additions on Confluent’s overall business? Do you anticipate introducing more products of a similar nature in the future?

Firstly, these additions serve as a substantial tailwind for Confluent’s growth trajectory. By expanding our suite of capabilities, we’re not only deepening our value proposition to existing customers but also attracting new ones. This translates to increased revenue streams and solidifies our position as a leader in the data streaming space.

The ease and versatility afforded by these additions are poised to drive further adoption of our platform across industries. As organisations increasingly recognise the importance of real-time data processing and analytics, they’ll turn to solutions like ours that offer a comprehensive suite of tools to meet their evolving needs. This, in turn, fosters long-term customer loyalty and propels our business forward.

As for future product development, while I can’t provide specifics, I can certainly affirm our commitment to innovation and meeting the evolving demands of our customers.

The introduction of Flink and Iceberg is just the beginning of our journey towards building a robust ecosystem of data streaming solutions. We’re continuously exploring opportunities to enhance our platform, whether through internal development or strategic partnerships.

Jay Kreps, CEO, Confluent

With the increasing competition in the data streaming space, particularly from emerging players and established enterprise application providers, how do you plan to maintain your competitive edge?

While competition certainly exists, we view it as more of a validation of the immense potential of this space rather than a cause for concern. Confluent has established itself as a leader in the data streaming area, thanks to our relentless focus on innovation, customer success, and the strength of our platform.

When it comes to competition from enterprise application providers, we see it as largely complementary rather than adversarial. These companies often serve as our customers, leveraging our data streaming capabilities to enhance their own offerings.

Moreover, the trend towards open platforms for data presents a significant opportunity for collaboration. As more companies recognise the importance of seamless data integration and real-time processing, there’s a growing demand for interoperable solutions.

Confluent, with its open-source roots and commitment to standards like Kafka, is well-positioned to capitalize on this trend.

That being said, we’re not complacent about the competitive landscape. We understand the importance of continuous innovation and differentiation to maintain our competitive edge.

This involves not only enhancing our existing offerings but also exploring new avenues for growth and expansion. Whether it’s through strategic partnerships, targeted product development, or superior customer service, we remain steadfast in our commitment to staying ahead of the curve.

So, in summary, while competition in the data streaming space is inevitable, we see it as an opportunity for collaboration, innovation, and ultimately, mutual growth. Our focus remains on delivering value to our customers and solidifying our position as the leading provider of data streaming solutions.

What is the impact on enterprises in terms of how this affects the way they operate? What are the organisational changes resulting from these trends?

Our perspective at Confluent is more about connecting everything and integrating it rather than replacing existing data platforms entirely. We have a very open view of the enterprise ecosystem. Our vision revolves around facilitating connections and integration rather than disruption. It’s crucial to understand that the distinction between operational applications and analytical data processing isn’t as clear-cut as it used to be.

In many organisations, different teams manage each, but there’s a growing middle ground where applications fall somewhere in between. Take AI applications, for example—they involve both complex data processing and operational functions like customer interaction.

Bridging this gap requires a new approach that unifies streaming interaction across analytical and operational realms. Regarding the personas building these applications, there’s a shift towards more accessible tools.

Traditionally, building machine learning applications required specialised skills, but newer tools make it more accessible to a broader set of personas. This convergence of analytics and operations presents exciting opportunities but also raises questions about organisational responsibilities and technological stacks.

Could you delve deeper into the organizational implications of this shift? Do you foresee any challenges in aligning organizational structures with this evolving technological landscape?

The emergence of hybrid applications blurs the lines between traditional roles within organisations. For instance, building machine learning applications traditionally required separate teams of software engineers and domain experts.

However, newer tools democratise access to machine learning, making it more accessible to traditional software engineers with minimal machine learning expertise. This shift in responsibilities raises questions about organizational structures and the skill sets required.

It’s a balancing act between leveraging specialised expertise and fostering cross-functional collaboration. As for challenges, aligning organisational structures with evolving technology landscapes can be complex.

It requires a cultural shift towards embracing change and fostering a culture of innovation and collaboration. Organizations need to rethink traditional silos and encourage interdisciplinary collaboration to harness the full potential of hybrid applications.

How do organisations leverage technologies like Kafka and Flink to streamline data ingestion and processing? Specifically, how does this integration impact data warehousing and analytics?

Integrating streaming data into enterprise analytics systems offers numerous benefits, primarily in terms of real-time insights and improved data freshness. Technologies like Kafka and Flink play a crucial role in streamlining data ingestion and processing.

Kafka serves as a scalable, fault-tolerant messaging system that reliably captures and streams data in real-time. Flink, on the other hand, provides powerful stream processing capabilities, allowing organizations to analyse and transform streaming data in real-time.

By leveraging these technologies, organisations can reduce latency and improve data freshness, enabling faster decision-making and better insights. However, integrating streaming data into existing data warehousing and analytics systems poses challenges, particularly around data governance and infrastructure scalability.

Organisations must ensure proper data governance practices and invest in scalable infrastructure to support the influx of streaming data effectively.

Kafka and Flink play complementary roles in enabling real-time data processing and analytics. Kafka serves as a distributed messaging system that enables the capture and streaming of data in real-time. It acts as a reliable, fault-tolerant data pipeline that ingests and streams data from various sources.

Flink, on the other hand, provides powerful stream processing capabilities, allowing organisations to analyse, transform, and aggregate streaming data in real-time.

Together, Kafka and Flink enable organisations to build robust real-time analytics solutions that deliver actionable insights at scale.
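The division of labour described here — Kafka carrying the stream, Flink computing over it — can be pictured with a toy tumbling-window aggregation. This is plain Python standing in for what a Flink job would run continuously on a cluster; the event format is invented for the example.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per key in fixed, non-overlapping time windows —
    the kind of aggregation Flink evaluates continuously over a Kafka topic."""
    counts = defaultdict(int)
    for timestamp, key in events:
        # Assign each event to the window containing its timestamp.
        window_start = (timestamp // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# (epoch-seconds, page) click events, as they might arrive from a topic.
clicks = [(0, "home"), (3, "home"), (7, "pricing"), (11, "home"), (14, "home")]
print(tumbling_window_counts(clicks, window_seconds=10))
# → {(0, 'home'): 2, (0, 'pricing'): 1, (10, 'home'): 2}
```

In production the loop never ends: Kafka keeps delivering events and Flink keeps emitting updated window results downstream.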

Some common use cases where Kafka and Flink excel include real-time fraud detection, dynamic pricing, predictive maintenance, and personalised recommendations.

By leveraging these technologies, organisations can gain real-time insights into their data, enabling faster decision-making and better business outcomes.

What are some key challenges associated with real-time fraud detection, and how do organisations overcome them?

Real-time fraud detection using Kafka and Flink involves capturing, processing, and analysing streaming data to identify suspicious patterns and anomalies in real-time. Organisations ingest data from various sources, including transaction logs, user behaviour, and external data feeds, into Kafka for real-time streaming.

Flink then processes and analyses this data in real-time, applying machine learning algorithms and business rules to detect fraudulent activities. Key challenges associated with real-time fraud detection include data volume, velocity, and variety, as well as model drift and false positives.
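One common building block of the pipeline described above is a rolling statistical rule applied per stream of transactions. The sketch below is a toy plain-Python stand-in for such a rule, not an actual Flink job or production fraud model; the threshold factor and values are invented for the example.

```python
from collections import deque

def detect_outliers(amounts, window=5, factor=3.0):
    """Flag a transaction as suspicious when it exceeds `factor` times the
    rolling mean of the previous `window` transactions — a toy stand-in for
    the rules and models a Flink job would apply in real time."""
    history = deque(maxlen=window)
    flagged = []
    for i, amount in enumerate(amounts):
        # Only judge once we have a full window of history to compare against.
        if len(history) == history.maxlen and amount > factor * (sum(history) / len(history)):
            flagged.append(i)
        history.append(amount)
    return flagged

txns = [12.0, 9.5, 11.0, 10.0, 13.0, 250.0, 12.5]
print(detect_outliers(txns))  # → [5]
```

The challenges the answer lists map directly onto this sketch: a fixed threshold drifts as spending patterns change (model drift) and a single rule produces false positives, which is why real systems layer ML models on top and retrain them continuously.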

Organisations overcome these challenges by leveraging scalable infrastructure, implementing robust data governance practices, and employing advanced analytics techniques.

Additionally, organisations continuously monitor and update their fraud detection models to adapt to changing fraud patterns and minimize false positives. By addressing these challenges, organizations can effectively detect and prevent fraud in real-time, protecting their businesses and customers from financial losses and reputational damage.

What is the impact of combining operational analytics and core data on enterprises? How does this convergence reshape how organizations operate, innovate, and deliver value to customers? Are there any emerging trends you anticipate will further accelerate this convergence?

Traditionally, operational systems and analytical systems were separate, with different technology stacks supporting each. However, as organisations embrace digital transformation, these boundaries are blurring. Operational applications are increasingly incorporating analytical capabilities to improve decision-making and enhance customer experiences.

This convergence enables organisations to derive real-time insights from operational data, driving faster and more informed decision-making. Additionally, it fosters innovation by enabling organisations to develop AI-driven applications that leverage both historical and real-time data.

Looking ahead, we anticipate further acceleration of this convergence with the proliferation of AI and machine learning technologies. Organisations will increasingly leverage AI-driven applications to automate processes, personalise experiences, and drive business outcomes.

This will further blur the lines between operational and analytical systems, driving organizations to adopt more integrated data management approaches. Overall, the convergence of operational analytics and core data reshapes how organizations operate, innovate, and deliver value to customers, driving competitive advantage and enabling growth in an increasingly data-driven world.

We believe in creating a unified data platform that seamlessly connects operational and analytical systems, enabling organizations to harness the full potential of their data. Unlike some vendors who advocate for replacing existing data warehouses or data lakes, Confluent takes a more open approach, focusing on integration and interoperability. Our goal is to facilitate the flow of data across the enterprise ecosystem, ensuring that data can be easily accessed, processed, and analysed by various applications and teams.

At the core of Confluent’s approach is Apache Kafka, a distributed event streaming platform that serves as the central nervous system for real-time data movement. Kafka enables organizations to capture, process, and distribute data streams in a scalable, fault-tolerant manner, making it ideal for both operational and analytical use cases.

Moreover, Confluent extends beyond Kafka to offer a comprehensive suite of tools and services, including connectors, stream processing with Apache Flink, schema management, and data governance capabilities. These components work together to empower organizations to build real-time data pipelines that span from edge to cloud.

By bridging the gap between operational and analytical systems, Confluent enables organisations to unlock new possibilities for innovation, agility, and efficiency, driving competitive advantage and enabling growth in an increasingly data-driven world.