
My Journey at 0xPPL: Building High-Performance Systems as an Early Engineer


Introduction

From February 2024 to November 2024, I had the incredible opportunity to work as an Early Engineer (OG Team) at 0xPPL, a social media platform founded by a Rippling co-founder. I was part of one of India’s top engineering teams, composed of 7 ICPC World Finalists (including 3 India Rank-1s).

During my time there, I became a top-4 backend contributor with 600+ PRs merged in 11 months, and I had full ownership of multiple large-surface-area projects that significantly impacted user engagement, system performance, and overall product experience.

This post is a comprehensive overview of the major systems I built and the technical challenges I solved.

1. Identity Unification Using Disjoint Set Union (DSU)

The Challenge

Users on 0xPPL connected accounts from multiple social platforms (Twitter, Farcaster, Lens) and crypto wallets. However, the same person often had fragmented profiles across these platforms, making it difficult to present a single, unified identity for each user.

The Solution

I designed and implemented a user identity resolution system leveraging Disjoint Set Union (DSU) algorithms to unify fragmented user profiles.

Technical Deep Dive

The DSU algorithm efficiently handles the “friend of a friend” relationships across platforms. When we detect that two accounts belong to the same person (via shared wallet addresses, ENS names, or other signals), we merge them in the DSU structure.

This allows O(α(n)) time complexity for union and find operations, where α is the inverse Ackermann function, making it nearly constant time in practice.
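The merge logic above can be sketched as a standard DSU with path compression and union by rank; the identity keys (like `twitter:@alice`) and the wallet-based linking signal are illustrative, not the production schema:

```python
class DSU:
    """Disjoint Set Union with path compression and union by rank."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        # Lazily register identities the first time we see them.
        if x not in self.parent:
            self.parent[x] = x
            self.rank[x] = 0
        # Path compression (halving): point x closer to its root as we walk.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1


ids = DSU()
# A shared wallet address links a Twitter handle and a Farcaster account,
# so all three identities collapse into one person.
ids.union("twitter:@alice", "wallet:0xabc")
ids.union("farcaster:alice.fc", "wallet:0xabc")
```

Once merged, any identity resolves to the same root, so profile lookups by any handle land on the same unified user.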

2. High-Throughput Background Job System (40K Jobs/Minute)

The Challenge

Our background job processing system was handling around 600-700 jobs per minute, and large batches took 30+ minutes to finish. We needed to scale this dramatically for features like notification generation, social graph updates, and data aggregation.

The Solution

I architected an Auto-Scalable Recursive Binary Splitting Range Scheduler that could handle 30,000-40,000 jobs per minute — a 60x performance improvement.

Architecture

The scheduler works by taking a range of work items (e.g., user IDs 1-100000) and recursively splitting it into smaller ranges. Each worker processes a range, and if the range is still too large or the worker is overloaded, it further splits the range and delegates to other workers.

Initial: [1 - 100000]
Split:   [1 - 50000] [50001 - 100000]
Split:   [1 - 25000] [25001 - 50000] [50001 - 75000] [75001 - 100000]
...and so on

The system monitors health metrics and can prune branches early if certain conditions are met, avoiding unnecessary work.
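The splitting step can be sketched as follows. This is a simplified, in-process version; the production scheduler enqueues each half as a new job and auto-scales workers based on health metrics, neither of which is shown here:

```python
def split_range(lo, hi, max_chunk):
    """Recursively binary-split the inclusive range [lo, hi] until every
    chunk holds at most max_chunk items, returning the leaf chunks."""
    if hi - lo + 1 <= max_chunk:
        return [(lo, hi)]
    mid = (lo + hi) // 2
    # In production, each half would be enqueued as a separate job for
    # another worker to claim; here we just recurse in-process.
    return split_range(lo, mid, max_chunk) + split_range(mid + 1, hi, max_chunk)
```

For example, splitting user IDs 1-100000 with a 25,000-item budget yields exactly the four leaf ranges shown in the diagram above.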

3. Deadlock-Resilient Transaction Processing

The Challenge

Our system was experiencing frequent deadlocks during bulk database operations, especially when creating or updating large batches of records. These deadlocks required manual intervention and caused system instability.

The Solution

I designed a deadlock-aware bulk create/update transaction wrapper that automatically detects and resolves deadlocks.

How It Works

The wrapper detects deadlock exceptions from PostgreSQL and automatically retries the transaction with:

  1. Exponential backoff
  2. Record sorting to ensure consistent lock ordering
  3. Transaction isolation level optimization
  4. Batch size adjustment based on contention
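A minimal sketch of the retry loop covering points 1 and 2 (exponential backoff plus consistent lock ordering). The `DeadlockDetected` exception here is a stand-in for the real driver error (for PostgreSQL via psycopg2 that would be `psycopg2.errors.DeadlockDetected`), and the record shape is illustrative:

```python
import random
import time


class DeadlockDetected(Exception):
    """Stand-in for the database driver's deadlock error."""


def with_deadlock_retry(txn, records, max_retries=5, base_delay=0.05):
    """Run txn(records), retrying on deadlock with exponential backoff.

    Records are sorted by primary key first so that every retry (and every
    concurrent worker doing the same) acquires row locks in the same order,
    which removes the lock-cycle condition that causes deadlocks.
    """
    ordered = sorted(records, key=lambda r: r["id"])
    for attempt in range(max_retries):
        try:
            return txn(ordered)
        except DeadlockDetected:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter to de-synchronise retries.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Sorting before every attempt is cheap insurance: even if two workers receive the same batch in different orders, they will contend briefly and queue rather than deadlock.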

4. Personalized Notification System (20x CTR Improvement)

The Challenge

Our notification system had a 0.5% click-through rate and was actually driving users away from the platform. Notifications were generic, poorly timed, and often irrelevant.

The Solution

I led a comprehensive overhaul of the notification system, transforming it into a highly personalized, contextual experience.

Technical Highlights

I implemented a sophisticated scoring system that ranks each notification by its likely relevance to the recipient before it is delivered.

The pagination system uses time-based windows rather than offset-based pagination, making it efficient even for users with thousands of notifications.
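A minimal sketch of the time-window idea over an in-memory list (the real system queries the database, and the field names here are illustrative). The client passes the oldest timestamp it has seen as the next `before`, so no row offsets are ever scanned:

```python
from datetime import datetime, timedelta


def notification_page(notifications, before, window=timedelta(days=1), limit=3):
    """Return one page of notifications from the time window ending at `before`.

    Keyset-style pagination: filtering on a timestamp range is an index
    lookup, whereas OFFSET-based pagination must walk and discard every
    earlier row, degrading linearly for users with deep histories.
    """
    lo = before - window
    page = [n for n in notifications if lo <= n["created_at"] < before]
    page.sort(key=lambda n: n["created_at"], reverse=True)
    return page[:limit]
```

The next request would use `page[-1]["created_at"]` as its `before`, walking backwards one window at a time.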

5. World’s First Polymarket Integration

The Challenge

Polymarket is the world’s largest decentralized prediction market, operating across multiple blockchains. No other app had successfully integrated it, which meant there was no established integration path to follow.

The Solution

I achieved the first-ever successful web3 integration of Polymarket in 3.5 weeks through reverse engineering.

6. Cross-Platform Social Graph Infrastructure

The Challenge

Building a social media platform that aggregates data from Twitter, Lens, and Farcaster required querying relationship data across billions of nodes in real time.

The Solution

I architected a high-performance multiplatform infrastructure that manages social connection data at massive scale.

Graph Query Optimization

The key insight was that social graphs are extremely sparse: most people are connected to only a tiny fraction of all users. By using partial indexes on the most active users and connection types, we kept indexes small and lookups fast.
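The production database was PostgreSQL, but SQLite supports the same partial-index feature, so here is a self-contained sketch (the `follows` schema and index name are hypothetical). The query planner picks the small partial index because the query's predicate implies the index's `WHERE` clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE follows (
        follower_id INTEGER,
        followee_id INTEGER,
        followee_is_active INTEGER
    )
""")
# Partial index: only edges pointing at active users are indexed, keeping
# the index far smaller than the full (very sparse) edge table.
conn.execute("""
    CREATE INDEX idx_active_followee
    ON follows (followee_id)
    WHERE followee_is_active = 1
""")
# Queries that repeat the index's predicate are eligible to use it.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT follower_id FROM follows
    WHERE followee_id = 42 AND followee_is_active = 1
""").fetchall()
```

Because inactive users never enter the index, writes touching them skip index maintenance too, which matters at social-graph write volumes.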

7. Optimized Onboarding Experience

The Challenge

New users were experiencing a 5-minute wait during onboarding while we generated their personalized feed. This led to significant drop-off.

The Solution

I re-engineered the entire data processing flow to deliver personalized content instantly.

Optimization Techniques

  1. Pre-computation: Calculated likely connections based on wallet addresses before user completes signup
  2. Parallel fetching: Queried multiple platforms simultaneously
  3. Smart caching: Cached common friend graphs
  4. Progressive loading: Showed initial results immediately while loading more in background
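Technique 2 is the simplest to illustrate: instead of fetching each platform's graph sequentially, the calls run concurrently, so total latency is bounded by the slowest platform rather than the sum of all of them. The stub fetchers below stand in for real API calls:

```python
from concurrent.futures import ThreadPoolExecutor


# Stub fetchers standing in for real platform API calls.
def fetch_twitter(user):
    return {"platform": "twitter", "friends": ["a", "b"]}


def fetch_farcaster(user):
    return {"platform": "farcaster", "friends": ["b", "c"]}


def fetch_lens(user):
    return {"platform": "lens", "friends": ["c"]}


def fetch_all_graphs(user):
    """Query every platform concurrently and collect results in order."""
    fetchers = [fetch_twitter, fetch_farcaster, fetch_lens]
    with ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
        futures = [pool.submit(fn, user) for fn in fetchers]
        return [f.result() for f in futures]
```

Threads are a good fit here because the work is network-bound; for CPU-bound aggregation, a process pool or async I/O would be the usual alternatives.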

8. Scheduled & Draft Posts Infrastructure

The Challenge

Users wanted the ability to schedule posts and save drafts across multiple platforms (Twitter, Farcaster, Lens), but this required reliable delivery and consistent post state across all three.

The Solution

I architected a highly reliable system for scheduled and draft posts with cross-platform capabilities.

9. Enhanced People Recommendation System

The Challenge

Recommending relevant people to follow required understanding connections across multiple platforms simultaneously.

The Solution

I developed an algorithm for people recommendations that considers multi-platform social intersections.
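The post does not spell out the exact formula, so here is a hedged sketch of the core idea: a friend-of-friend candidate who appears behind my connections on several platforms outranks one surfaced by a single platform. The graph shape and scoring weights are illustrative:

```python
from collections import Counter


def recommend_people(me, graphs, top_k=2):
    """Rank follow candidates by multi-platform social intersection.

    graphs: dict mapping platform -> dict of user -> set of users they follow.
    Each platform where a candidate sits one hop away adds one point, so
    cross-platform overlap naturally floats to the top.
    """
    scores = Counter()
    for platform, follows in graphs.items():
        mine = follows.get(me, set())
        for friend in mine:
            for candidate in follows.get(friend, set()):
                # Skip myself and people I already follow on this platform.
                if candidate != me and candidate not in mine:
                    scores[candidate] += 1
    return [user for user, _ in scores.most_common(top_k)]
```

A real ranker would weight platforms and edge types differently and decay stale connections, but the intersection signal is the backbone.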

10. Engagement API Optimization (20x Performance Boost)

The Challenge

Critical engagement actions (likes, posts, replies, follows) had p75 latency of 2.5 seconds, making the app feel sluggish.

The Solution

I optimized the entire engagement pipeline through database query optimization, caching, and workflow improvements.

Optimization Strategies

  1. Query optimization: Reduced N+1 queries through eager loading
  2. Redis caching: Cached user engagement state
  3. Async processing: Moved non-critical operations to background
  4. Database indexes: Added covering indexes for common queries
  5. Connection pooling: Optimized database connection management
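Strategy 2 can be sketched as a cache-aside decorator; a plain dict with expiry timestamps stands in for Redis, and the like-count query is a placeholder:

```python
import functools
import time

_cache = {}  # stand-in for Redis: key -> (value, expires_at)


def cached(ttl):
    """Cache-aside decorator with a TTL, mimicking the Redis layer."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(key):
            entry = _cache.get((fn.__name__, key))
            if entry and entry[1] > time.monotonic():
                return entry[0]  # cache hit: skip the database entirely
            value = fn(key)
            _cache[(fn.__name__, key)] = (value, time.monotonic() + ttl)
            return value
        return wrapper
    return decorator


db_reads = {"count": 0}


@cached(ttl=60)
def user_like_count(user_id):
    db_reads["count"] += 1  # each miss costs one database round-trip
    return 42               # placeholder for an aggregate query
```

With engagement state cached this way, the hot path for rendering a post's like button never touches the database while the entry is fresh; writes would invalidate or update the key.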

11. Feed Engagement Metrics & Viral Content Detection

The Challenge

Detecting viral content and understanding engagement patterns in a heterogeneous social media graph (spanning multiple platforms) was an open-ended problem with no established solution.

The Solution

I developed an optimized algorithm to detect engagement loops using advanced graph traversal techniques.
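The post does not name the exact traversal, but one plausible building block is cycle detection in the directed interaction graph, where a cycle of engagements (A engages B, B engages C, C engages A) signals a loop. A coloured depth-first search finds back edges; the graph representation here is illustrative:

```python
def has_engagement_loop(edges):
    """Return True if the directed interaction graph contains a cycle.

    edges: dict mapping a user to the users they engaged with. Hitting a
    GRAY node (still on the DFS stack) means we found a back edge, i.e.
    an engagement loop.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    colour = {}

    def dfs(u):
        colour[u] = GRAY
        for v in edges.get(u, []):
            c = colour.get(v, WHITE)
            if c == GRAY or (c == WHITE and dfs(v)):
                return True
        colour[u] = BLACK  # fully explored, no cycle through u
        return False

    return any(colour.get(u, WHITE) == WHITE and dfs(u) for u in edges)
```

On a multi-million-node graph the production version would be iterative and incremental rather than recursive and global, but the back-edge idea is the same.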

12. UserDeviceTracker System (Breaking Changes Without Breaking)

The Challenge

Rolling out breaking changes (non-backward-compatible features) while maintaining app stability for users on older versions.

The Solution

I developed a UserDeviceTracker system that manages feature rollouts based on app version and OS version.
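The gating decision reduces to a correct version comparison; a minimal sketch (the feature names and thresholds are hypothetical, and the real tracker also considers OS version) looks like this:

```python
def version_tuple(v):
    """Parse '2.10.1' -> (2, 10, 1). Tuple comparison is numeric, so
    2.10.0 correctly ranks above 2.3.0, which naive string comparison
    would get wrong."""
    return tuple(int(part) for part in v.split("."))


def feature_enabled(feature, app_version, min_versions):
    """Gate a breaking feature on the client's reported app version.

    min_versions maps feature name -> minimum app version that supports
    it; clients below the threshold keep the legacy behaviour.
    """
    required = min_versions.get(feature)
    if required is None:
        return True  # unguarded features roll out everywhere
    return version_tuple(app_version) >= version_tuple(required)
```

The server checks this on every request using the version the device reports, so a breaking API change ships dark and only activates once the client can handle it.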

Lessons Learned

1. Start Simple, Scale Smart

Many of my systems started with simple implementations and evolved based on real usage patterns. The job scheduler, for instance, initially had fixed parallelism before I added auto-scaling.

2. Measure Everything

Having detailed metrics was crucial for optimization. I could only achieve 20x performance improvements because I knew exactly where the bottlenecks were.

3. User Experience is Technical

The “wow” moment during onboarding wasn’t just good UX design — it was the result of aggressive technical optimization that made instantaneous results possible.

4. Algorithms Matter

Competitive programming experience directly translated to real-world impact. DSU for identity resolution, graph algorithms for engagement detection, and binary splitting for job scheduling all came from algorithmic thinking.

5. Ownership Drives Impact

Having full ownership of large surface areas meant I could make holistic decisions rather than point solutions. This led to more impactful improvements.

Conclusion

My time at 0xPPL was incredibly rewarding. Being part of an elite engineering team pushed me to deliver my best work, and having ownership of critical systems taught me how to think about scalability, user experience, and system reliability at every level.

The systems I built continue to serve thousands of users, and the lessons I learned will guide my engineering decisions for years to come.

Currently, I’m bringing these learnings to my role as a Founding Engineer at Share.xyz, where I’m building collaborative tools from the ground up.

If you’re working on challenging problems in backend systems, distributed systems, or algorithms, I’d love to connect! Reach out at [email protected].


This post was last updated on November 10, 2024.
