Summary

  • Customer: major online fashion retailer
  • Product Used: SQLWays, ServiceWays
  • Source technology: MS SQL Server
  • Target technology: PostgreSQL on AWS
  • Project scope: 1.2 million lines of SQL, 8 TB of data
  • Project duration: 5 months

Client profile

The customer is a leading online fashion retailer, known for its fast-fashion business model and massive inventory. With millions of active users and a strong presence on social media, their platform handles thousands of transactions per minute and serves a global customer base.

Objective

To support its rapid international growth and handle massive traffic spikes during peak shopping seasons (like Black Friday), the company needed to migrate its core eCommerce and inventory management databases from an on-premise MS SQL Server to PostgreSQL on AWS, a more scalable and cost-effective cloud-native solution.

Challenge

The migration to PostgreSQL was driven by several critical business and technical challenges:

  • The existing MS SQL Server infrastructure struggled with performance degradation during high-traffic events, leading to cart abandonment and lost revenue.
  • Rising Microsoft SQL Server licensing and maintenance costs were becoming a significant operational burden.
  • The monolithic on-premise database limited the development team's ability to adopt a more agile microservices-based architecture.

Solution

Decision-making stage

The client required a solution that could guarantee minimal downtime and ensure the integrity of years of transactional data. After evaluating several options, they chose the Ispirer Toolkit. The decision was based on our proven track record and the toolkit's high degree of automation for the complex MS SQL Server to PostgreSQL migration path. 

The client opted for an SQLWays license combined with ServiceWays customization services to address their unique database architecture.

Data migration stage

The migration process was carried out in several structured phases, including:

  1. Assessment
  2. Initial code conversion
  3. Result analysis
  4. Custom rules development
  5. Rule implementation and result validation

Throughout these stages, our team maintained close cooperation with the client's engineering department to ensure alignment with their technical and business requirements.

As a result, we achieved a 93% automation rate in the code conversion process. This outcome was achieved through the implementation of customizations designed to address the client's complex database architecture and ensure consistent performance in the new environment.

Case-insensitive comparison handling

One of the key technical challenges involved preserving case-insensitive behavior in string comparisons. Unlike Microsoft SQL Server, which typically defaults to case-insensitive collations, PostgreSQL performs string comparisons case-sensitively by default. 

For the business, case-insensitive comparison handling ensures data consistency and accurate search results, supporting operations that rely on flexible data matching regardless of how text is capitalized. Without it, users would see searches fail to return expected results because of capitalization mismatches.

To address this discrepancy, several strategies were evaluated and applied, including the use of the citext extension, custom nondeterministic ICU collations, and explicit normalization using the LOWER() function. The approach for each query was selected based on business logic requirements, and all decisions were aligned with the client's team expectations.
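The three strategies mentioned above can be sketched as follows. This is an illustrative example only: the table and column names are hypothetical, not taken from the client's schema.

```sql
-- Option 1: the citext extension — the column itself compares case-insensitively.
CREATE EXTENSION IF NOT EXISTS citext;
CREATE TABLE products (sku citext PRIMARY KEY, name text);
SELECT * FROM products WHERE sku = 'ab-100';   -- also matches 'AB-100'

-- Option 2: a nondeterministic ICU collation (PostgreSQL 12+),
-- applied to individual columns where business logic requires it.
CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2',
                     deterministic = false);
-- ALTER TABLE products ALTER COLUMN name TYPE text COLLATE ci;

-- Option 3: explicit normalization with LOWER() on both sides.
SELECT * FROM products WHERE LOWER(name) = LOWER('Summer Dress');
```

Each option has trade-offs: citext changes the column type globally, ICU collations act per column or per expression, and LOWER() is the most localized but must be paired with an expression index to stay performant.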

Dynamic SQL and runtime code construction

The original system made extensive use of dynamic SQL via EXEC and sp_executesql, often embedded within conditional blocks. In the PostgreSQL database, such logic was converted to use the EXECUTE command within PL/pgSQL, with dynamic components handled using the FORMAT() function. 

All dynamic segments, such as table names, column selections, and conditions, were parsed and restructured to retain their original runtime behavior. This transformation was performed in accordance with patterns reviewed and approved by the client's engineering team.
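The conversion pattern can be illustrated with a hypothetical example (the procedure, table, and parameter names are ours, not the client's):

```sql
-- T-SQL original (schematically):
--   EXEC sp_executesql
--        N'SELECT COUNT(*) FROM ' + @table_name + N' WHERE status = @s',
--        N'@s int', @s = @status;

CREATE OR REPLACE FUNCTION count_by_status(p_table text, p_status int)
RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
    v_count bigint;
BEGIN
    -- format() with %I safely quotes the dynamic identifier;
    -- the data value is bound via USING rather than string concatenation.
    EXECUTE format('SELECT count(*) FROM %I WHERE status = $1', p_table)
        INTO v_count
        USING p_status;
    RETURN v_count;
END;
$$;
```

Binding values with USING instead of concatenating them also closes the SQL-injection surface that hand-built dynamic strings tend to carry over from T-SQL.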

For the company, this means more flexibility and adaptability in database operations as well as the ability to handle varied or evolving data structures and business logic without requiring constant code changes.

Table data types and output parameters

User-defined table types used for structured data passing were replaced with composite types or temporary tables in the PostgreSQL database, with careful consideration of session scope and concurrency. Stored procedures that returned status codes using RETURN were restructured, as PostgreSQL procedures do not support return values. Instead, output parameters or auxiliary signaling logic were introduced, following a strategy discussed and validated with the client.
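The output-parameter replacement for T-SQL RETURN codes can be sketched as follows; the procedure and table names are illustrative assumptions, not the client's actual objects:

```sql
-- T-SQL original ended with RETURN 0 (success) or RETURN 1 (not found).
CREATE OR REPLACE PROCEDURE update_stock(
    p_sku    text,
    p_qty    int,
    INOUT p_status int)          -- replaces the T-SQL RETURN code
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE inventory SET quantity = quantity - p_qty WHERE sku = p_sku;
    IF FOUND THEN
        p_status := 0;           -- success
    ELSE
        p_status := 1;           -- row not found, as RETURN 1 signaled
    END IF;
END;
$$;

-- CALL update_stock('AB-100', 2, NULL);
-- the status comes back in the INOUT slot of the result row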

Such modernization improves data integrity and system robustness by replacing deprecated or incompatible features with PostgreSQL-native alternatives. We ensured reliable data transfer and clear communication of procedure outcomes within the updated database environment.

Return statuses and result sets

Procedures returning scalar values and multiple result sets required structural adaptation. PostgreSQL alternatives such as SETOF, TABLE returns, and refcursor were applied depending on the use case. Where both result sets and return codes were present, a hybrid solution was introduced, involving refcursors combined with output parameters. 

Each scenario was analyzed individually to determine the optimal conversion path while maintaining compatibility with existing application layers.
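The hybrid pattern described above (a result set plus a status code) can be sketched like this, with hypothetical object names:

```sql
CREATE OR REPLACE FUNCTION get_orders(
    p_customer_id int,
    OUT p_status  int,
    OUT p_rows    refcursor)
LANGUAGE plpgsql AS $$
BEGIN
    p_rows := 'order_cur';       -- give the cursor a predictable portal name
    OPEN p_rows FOR
        SELECT id, total FROM orders WHERE customer_id = p_customer_id;
    p_status := 0;
EXCEPTION WHEN OTHERS THEN
    p_status := 1;               -- status code replaces the T-SQL RETURN value
END;
$$;

-- Usage (refcursors live only inside a transaction):
-- BEGIN;
-- SELECT * FROM get_orders(42);
-- FETCH ALL FROM order_cur;
-- COMMIT;
```

The caller fetches the rows through the cursor while reading the status from the output parameter, mirroring the T-SQL pattern of a result set plus a return code.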

The adaptation enhances interoperability and efficiency by aligning data retrieval and status reporting mechanisms with PostgreSQL's capabilities.

Advanced SQL features: XML, JSON, hierarchical queries

Full support for XML and JSON processing was retained through PostgreSQL's native json, jsonb, and xml capabilities. Hierarchical queries, originally written as recursive CTEs in SQL Server, were rewritten using PostgreSQL's WITH RECURSIVE construct. Special attention was given to maintaining hierarchy depth, ordering, and recursive path logic.
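A typical WITH RECURSIVE rewrite for a hierarchy, such as a product category tree, looks like the following sketch (the table and columns are hypothetical):

```sql
WITH RECURSIVE category_tree AS (
    -- anchor: root categories
    SELECT id, parent_id, name,
           1 AS depth,
           name::text AS path
    FROM categories
    WHERE parent_id IS NULL
  UNION ALL
    -- recursive step: children of already-visited rows
    SELECT c.id, c.parent_id, c.name,
           t.depth + 1,
           t.path || ' > ' || c.name      -- preserve the recursive path
    FROM categories c
    JOIN category_tree t ON c.parent_id = t.id
)
SELECT * FROM category_tree
ORDER BY path;                            -- preserve hierarchy ordering
```

Carrying depth and path columns through the recursion is what preserves the ordering and path logic mentioned above.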

As a result, data accessibility and analytical power were preserved, allowing the client to continue robust data processing and insightful reporting.

Error handling in database migration

Finally, error handling logic based on TRY…CATCH blocks was converted into PostgreSQL's BEGIN … EXCEPTION … END structure. System functions commonly used in SQL Server for diagnostics, such as ERROR_MESSAGE() and ERROR_NUMBER(), do not have direct equivalents in PostgreSQL. However, message preservation was handled automatically through PostgreSQL's native SQLERRM and GET STACKED DIAGNOSTICS, allowing accurate capture and propagation of error context with minimal manual intervention.
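The converted error-handling shape can be sketched as follows; the statement inside the block is a hypothetical example:

```sql
-- T-SQL original: BEGIN TRY ... END TRY
--                 BEGIN CATCH SELECT ERROR_MESSAGE(), ERROR_NUMBER() END CATCH
DO $$
DECLARE
    v_msg   text;
    v_state text;
BEGIN
    INSERT INTO orders (id) VALUES (1);  -- may raise, e.g., a unique violation
EXCEPTION WHEN OTHERS THEN
    -- MESSAGE_TEXT / RETURNED_SQLSTATE roughly correspond to
    -- ERROR_MESSAGE() / ERROR_NUMBER() in SQL Server
    GET STACKED DIAGNOSTICS
        v_msg   = MESSAGE_TEXT,
        v_state = RETURNED_SQLSTATE;
    RAISE NOTICE 'error %: %', v_state, v_msg;
END;
$$;
```

Note that SQLSTATE is a five-character code rather than an integer, so any application logic that branched on specific ERROR_NUMBER() values needs a mapping rather than a one-to-one substitution.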

This conversion improved system reliability and maintainability, fostering robust error detection, clear diagnostic messaging, and consistent application behavior during unforeseen issues.

Thanks to these solutions and other complex implementations, what would have been a 12+ month manual migration was completed in a fraction of the expected time, preserving data integrity and business logic with minimal downtime. The client moved to their new cloud-based environment with minimal disruption and maximum confidence.

Technologies

  • MS SQL Server database
  • PostgreSQL database on AWS (Amazon RDS)

Try SQLWays for free to streamline database migration

Book a demo


Benefits and Results

As a result of the project, the fashion retailer achieved strategic advantages:

  • Slashed licensing and operational costs by 40% by moving from a commercial on-premise MS SQL Server database to a managed open-source cloud solution.
  • Completed the full migration in just 5 months, less than half the 12+ month manual estimate.
  • Increased database performance by 60% during peak loads, enhancing the customer shopping experience.