Customer Story

Fortune 100 telecommunications company seamlessly migrates from Teradata to Amazon Redshift

Business Needs:

A Fortune 100 broadband connectivity company and cable operator wanted to make a strategic shift to the cloud with the following objectives:

  • Enhance scalability: Ability to handle rapidly growing volumes of data, manage peaks in traffic, and run new business use cases with greater ease
  • Reduce costs: Lower Teradata licensing costs and maintenance overheads
  • Improve query performance: Ability to query raw data and generate business insights faster
  • Realize a unified view: Ability to track workloads across the end-to-end data transformation journey using a single platform
  • Simplify management: Overcome the management complexities of Teradata and Informatica by leveraging cloud-based services like Amazon Redshift, S3, and Athena
  • Integrate seamlessly: Ability to integrate with other cloud-native services to load data and visualize insights
  • Automate workflows for CI/CD: Ability to swiftly move changes from the development environment to staging and production environments

Solution:

To meet the customer’s business requirements, we built an end-to-end data flow solution leveraging the following key components:

  1. Amazon Redshift: Scalable, cloud-native data warehouse to collect and store data for all workloads, support query requirements, and accommodate varying business use cases
  2. Amazon S3: Cloud-native storage for the gathered data feeds
  3. Amazon Athena/EMR/Redshift: Ad-hoc query engine to query data feeds directly and generate insights
  4. Gathr: All-in-one data pipeline platform, used to:
    • Configure ETL flows, ingest data, perform full load, incremental load, and CDC (SCD type 1 and 2) from Teradata to Redshift
    • Transform and persist data feeds to Amazon S3 with an auto-scalable execution engine
    • Enable one-time migration by directly loading Teradata tables into Amazon Redshift
    • Validate data post migration
    • Provide a unified view of the complete workflow
    • Set up CI/CD for upgrading ETL flows and moving them seamlessly from one environment to another
    • Schedule and trigger the data flow process at a pre-configured frequency
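One of the steps above applies CDC from Teradata to Redshift with SCD Type 1 (overwrite) and Type 2 (history-keeping) semantics. As an illustrative sketch only, not Gathr's actual implementation, the function below applies an SCD Type 2 change to an in-memory dimension table; the record layout (business key, attribute dict, effective/expiry dates, current flag) is an assumption:

```python
from datetime import date

def apply_scd2(dimension, change, today=None):
    """Apply one changed source row to a dimension table (a list of dicts)
    using SCD Type 2: expire the current version and append a new one.

    Assumed record layout (illustrative, not Gathr's schema):
      key, attrs (dict), valid_from, valid_to, is_current
    """
    today = today or date.today()
    for row in dimension:
        if row["key"] == change["key"] and row["is_current"]:
            if row["attrs"] == change["attrs"]:
                return dimension  # no attribute change; nothing to do
            # Expire the current version as of today
            row["is_current"] = False
            row["valid_to"] = today
            break
    # Append the new current version (also covers brand-new keys)
    dimension.append({
        "key": change["key"],
        "attrs": dict(change["attrs"]),
        "valid_from": today,
        "valid_to": None,
        "is_current": True,
    })
    return dimension
```

For example, when a customer's city changes, the old row is expired (with its end date set) and a new current row is appended, so queries can reconstruct the value at any point in time. SCD Type 1 would instead overwrite the attributes in place, keeping no history.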
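Post-migration validation, also listed above, typically reconciles the source and target tables by row counts and key coverage. A minimal sketch, assuming both sides have already been fetched into memory as lists of row dicts (a production check would push counts and column aggregates down to Teradata and Redshift rather than pull rows):

```python
def validate_migration(source_rows, target_rows, key_col):
    """Reconcile two extracted tables: compare row counts and
    per-key presence, and report any discrepancies.

    Illustrative only -- a real validation would also compare
    column checksums/aggregates computed inside each database.
    """
    report = {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
    }
    src_keys = {row[key_col] for row in source_rows}
    tgt_keys = {row[key_col] for row in target_rows}
    report["missing_in_target"] = sorted(src_keys - tgt_keys)
    report["unexpected_in_target"] = sorted(tgt_keys - src_keys)
    report["ok"] = (
        report["source_count"] == report["target_count"]
        and not report["missing_in_target"]
        and not report["unexpected_in_target"]
    )
    return report
```

The report flags both directions of drift: rows dropped during migration (`missing_in_target`) and rows that appeared only on the target side (`unexpected_in_target`).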

Business Benefits:

  • Ability to handle 30 billion rows and easily scale to manage fluctuating production loads
  • 20% better query performance for analytical queries
  • Support for 40% more analytics users across the enterprise
  • 15% increase in the number of queries executed, enabling users to unlock new business opportunities
  • Ability to perform ETL with minimal hand-coding using pre-built operators
  • 360-degree visibility with a unified, configurable view of all workloads
  • Lower licensing and infrastructure cost


      Meet Gathr.

      The only all-in-one data pipeline platform

      • One platform to do it all - ETL, ELT, ingestion, CDC, ML
      • Self-service, zero-code, drag-and-drop interface
      • Built-in DataOps, MLOps, and DevOps tools
      • Cloud-agnostic and interoperable
      • Data Ingestion
      • Change Data Capture
      • ETL/ELT Data Integration
      • Streaming Analytics
      • Data Preparation
      • Machine Learning

      Expert Opinion

      Gathr is an end-to-end, unified data platform that handles ingestion, integration/ETL (extract, transform, load), streaming analytics, and machine learning. It offers strengths in usability, data connectors, tools, and extensibility.


      Customer Speak

      Gathr helped us build “in-the-moment” actionable insights from massive volumes of complex operational data to effectively solve multiple use cases and improve the customer experience.


      IN THE SPOTLIGHT

      Learning and Insights: Stay ahead of the curve

      • Q&A with Forrester: Building a modern data stack: What playbooks don’t tell you
      • Blog: 4 common data integration pitfalls to avoid
      • Blog: Why modernizing ETL is imperative for massive scale, real-time data processing
      • Fireside Chat: Don’t just migrate. Modernize your legacy ETL.