Confluent, Inc. announced the Confluent First Quarter ‘22 Launch, which includes new additions to the industry's largest portfolio of fully managed data streaming connectors, new controls for cost-effectively scaling massive-throughput Apache Kafka clusters, and a new feature to help maintain trusted data quality across global environments. These innovations help enable simple, scalable, and reliable data streaming across the business, so any organization can deliver the real-time operations and customer experiences needed to succeed in a digital-first world. However, for many organizations, real-time data remains out of reach.

Data lives in silos, trapped within different systems and applications because integrations take months to build and significant resources to manage. In addition, adapting streaming capacity to meet constantly changing business needs is a complex process that can result in excessive infrastructure spend. Lastly, ensuring data quality and compliance on a global scale is a complicated technical feat, typically requiring close coordination across teams of Kafka experts.

Confluent's newest connectors include Azure Synapse Analytics, Amazon DynamoDB, Databricks Delta Lake, Google BigTable, and Redis for increased coverage of popular data sources and destinations. Available only on Confluent Cloud, Confluent's portfolio of over 50 fully managed connectors helps organizations build powerful streaming applications and improve data portability. These connectors, designed with Confluent's deep Kafka expertise, provide organizations an easy path to modernizing data warehouses, databases, and data lakes with real-time data pipelines:

- Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift
- Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google BigTable
- Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake
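For illustration, a fully managed connector can be provisioned programmatically through the Confluent Cloud Connect API rather than the UI. The Python sketch below shows roughly what creating a managed Amazon S3 sink connector might look like; the environment ID, cluster ID, credentials, and connector config keys are placeholders and assumptions, so the exact field names should be checked against the connector's documentation.

```python
import requests

# Placeholder IDs and credentials -- replace with real values.
ENV_ID = "env-abc123"           # hypothetical environment ID
CLUSTER_ID = "lkc-xyz789"       # hypothetical Kafka cluster ID
CLOUD_API_KEY = "CLOUD_KEY"     # Confluent Cloud API key (not a Kafka key)
CLOUD_API_SECRET = "CLOUD_SECRET"

# Connector name and configuration; the exact config keys vary by connector,
# so treat these as a sketch rather than a definitive reference.
connector = {
    "name": "s3-sink-orders",
    "config": {
        "connector.class": "S3_SINK",
        "name": "s3-sink-orders",
        "topics": "orders",
        "kafka.api.key": "KAFKA_KEY",
        "kafka.api.secret": "KAFKA_SECRET",
        "aws.access.key.id": "AWS_KEY",
        "aws.secret.access.key": "AWS_SECRET",
        "s3.bucket.name": "orders-archive",
        "output.data.format": "JSON",
        "time.interval": "HOURLY",
        "tasks.max": "1",
    },
}

# Create the connector in the target environment and cluster.
resp = requests.post(
    f"https://api.confluent.cloud/connect/v1/environments/{ENV_ID}"
    f"/clusters/{CLUSTER_ID}/connectors",
    auth=(CLOUD_API_KEY, CLOUD_API_SECRET),
    json=connector,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```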

To simplify real-time visibility into the health of applications and systems, Confluent also announced first-class integrations with Datadog and Prometheus. With a few clicks, operators get deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use, making it easier to identify, resolve, and avoid issues while freeing up time for everything else their jobs demand.
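Both integrations draw on metrics that can also be pulled directly. The sketch below, a minimal example assuming placeholder credentials and cluster ID, fetches Prometheus-formatted metrics from the Confluent Cloud Metrics API export endpoint, the kind of data a Prometheus scrape job or the Datadog integration consumes.

```python
import requests

CLOUD_API_KEY = "CLOUD_KEY"     # hypothetical Cloud API key
CLOUD_API_SECRET = "CLOUD_SECRET"
CLUSTER_ID = "lkc-xyz789"       # hypothetical Kafka cluster ID

# The export endpoint returns metrics in Prometheus text exposition format,
# which monitoring tools can scrape on a schedule.
resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    params={"resource.kafka.id": CLUSTER_ID},
    auth=(CLOUD_API_KEY, CLOUD_API_SECRET),
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:2000])  # print the first metrics lines for a quick look
```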

To ensure services always remain available, many companies are forced to over-provision capacity for their Kafka clusters, paying a steep price for excess infrastructure that often goes unused. Confluent solves this common problem with Dedicated clusters that can be provisioned on demand with just a few clicks and include self-service controls for adding and removing capacity at GBps+ throughput scale. Capacity is easy to adjust at any time through the Confluent Cloud UI, CLI, or API, and with automatic data balancing, these clusters continuously optimize data placement to balance load with no additional effort.
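As a sketch of what the API path might look like, the Python snippet below requests a resize of a Dedicated cluster to a target number of CKUs via the Confluent Cloud cluster management (cmk/v2) API. The IDs and credentials are placeholders, and the exact payload shape is an assumption that should be verified against the API reference.

```python
import requests

ENV_ID = "env-abc123"        # hypothetical environment ID
CLUSTER_ID = "lkc-xyz789"    # hypothetical Dedicated cluster ID
CLOUD_API_KEY = "CLOUD_KEY"
CLOUD_API_SECRET = "CLOUD_SECRET"

# Request an expansion (or shrink) to 4 CKUs. The payload shape follows the
# cmk/v2 cluster update call as understood here; confirm the authoritative
# schema in the Confluent Cloud API documentation.
payload = {
    "spec": {
        "config": {"kind": "Dedicated", "cku": 4},
        "environment": {"id": ENV_ID},
    }
}

resp = requests.patch(
    f"https://api.confluent.cloud/cmk/v2/clusters/{CLUSTER_ID}",
    auth=(CLOUD_API_KEY, CLOUD_API_SECRET),
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # the response should reflect the resizing request
```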

Additionally, minimum capacity safeguards protect clusters from being shrunk below what is needed to support active traffic. Paired with these controls, Confluent's new Load Metric API gives organizations a real-time view into cluster utilization so they can make informed decisions about when to expand and when to shrink capacity. With this new level of elastic scalability, businesses can run their highest-throughput workloads with high availability, operational simplicity, and cost efficiency.
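A hedged sketch of how such a utilization check might look: the snippet below queries the Confluent Cloud Metrics API for a cluster load metric over a one-hour window. The metric name (io.confluent.kafka.server/cluster_load_percent), interval, and credentials are assumptions chosen for illustration.

```python
import requests

CLUSTER_ID = "lkc-xyz789"    # hypothetical Dedicated cluster ID
CLOUD_API_KEY = "CLOUD_KEY"
CLOUD_API_SECRET = "CLOUD_SECRET"

# Query cluster load at one-minute granularity for one hour. The metric name
# and query shape are assumptions; verify both against the Metrics API docs.
query = {
    "aggregations": [
        {"metric": "io.confluent.kafka.server/cluster_load_percent"}
    ],
    "filter": {"field": "resource.kafka.id", "op": "EQ", "value": CLUSTER_ID},
    "granularity": "PT1M",
    "intervals": ["2022-03-01T00:00:00Z/PT1H"],
    "limit": 60,
}

resp = requests.post(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query",
    auth=(CLOUD_API_KEY, CLOUD_API_SECRET),
    json=query,
    timeout=30,
)
resp.raise_for_status()
for point in resp.json().get("data", []):
    # Inspect load over time, e.g. to decide whether to add or remove CKUs.
    print(point["timestamp"], point["value"])
```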

Global data quality controls are critical for maintaining a highly compatible Kafka deployment fit for long-term, standardized use across the organization. With Schema Linking, businesses now have a simple way to maintain trusted data streams across cloud and hybrid environments, with shared schemas that sync in real time. Paired with Cluster Linking, schemas are shared everywhere they're needed, making it easy to maintain data integrity across use cases such as global data sharing, cluster migrations, and preparation for real-time failover in the event of disaster recovery.
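As an illustrative sketch, Schema Linking relies on schema exporters that continuously copy subjects from a source Schema Registry to a destination. The snippet below creates such an exporter over the Schema Registry REST API; the endpoints, credentials, and field names are placeholders and assumptions, so the exact request shape should be confirmed against the Schema Linking documentation.

```python
import requests

# Hypothetical endpoints and credentials for source and destination
# Schema Registry instances -- replace with real values.
SRC_SR_URL = "https://psrc-source.us-east-2.aws.confluent.cloud"
SRC_SR_KEY, SRC_SR_SECRET = "SRC_SR_KEY", "SRC_SR_SECRET"
DEST_SR_URL = "https://psrc-dest.eu-west-1.aws.confluent.cloud"
DEST_SR_KEY, DEST_SR_SECRET = "DEST_SR_KEY", "DEST_SR_SECRET"

# A schema exporter keeps the selected subjects in sync from the source
# registry to the destination. Field names here are a sketch of the exporter
# API, not a definitive reference.
exporter = {
    "name": "orders-schema-link",
    "subjects": ["orders-value"],
    "contextType": "AUTO",
    "config": {
        "schema.registry.url": DEST_SR_URL,
        "basic.auth.credentials.source": "USER_INFO",
        "basic.auth.user.info": f"{DEST_SR_KEY}:{DEST_SR_SECRET}",
    },
}

resp = requests.post(
    f"{SRC_SR_URL}/exporters",
    auth=(SRC_SR_KEY, SRC_SR_SECRET),
    json=exporter,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```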