repligram
Fully-responsive Instagram clone
Repligram is a fully-responsive Instagram clone. Posts with multi-image carousels, comments with nested replies, likes on everything, DMs with emoji reactions, a follow system, notifications, and bookmarks. I built it to understand what it takes to ship a feature-complete social app — the real complexity isn't any single feature, it's how they all interact.
Architecture
The stack is Next.js 16 (App Router) with tRPC v11 for the API layer, Drizzle ORM with PostgreSQL, Clerk for auth, Ably for real-time, and UploadThing for image uploads. The tRPC layer has 6 routers (posts, comments, likes, messages, notifications, user) that delegate to a service layer — routers handle auth extraction and input validation, services own the actual database queries and business logic. This keeps each piece independently testable and makes it easy to find where things live.
The database has 12 tables with cascade deletes throughout. IDs are 12-char base62 strings from nanoid. The users table uses Clerk's user ID directly as the primary key, so there's no mapping layer between auth and data.
Cursor-based pagination everywhere
Every paginated query — posts, comments, replies, notifications, bookmarks — uses the same two-field cursor pattern with (createdAt, id). The createdAt field handles ordering, and id is a tie-breaker for items created at the same timestamp. This is keyset pagination, which is stable under concurrent writes — no skipped or duplicated items, unlike OFFSET-based pagination where an insert shifts everything.
```ts
const items = await db
  .select()
  .from(posts)
  .leftJoin(users, eq(posts.userId, users.id))
  .where(
    cursor
      ? or(
          // Get posts created before the cursor date
          lt(posts.createdAt, cursor.createdAt),
          // Or same timestamp but smaller ID (tie-breaker)
          and(
            eq(posts.createdAt, cursor.createdAt),
            lt(posts.id, cursor.id),
          ),
        )
      : undefined,
  )
  .orderBy(desc(posts.createdAt), desc(posts.id))
  .limit(limit + 1);

let nextCursor: typeof cursor | undefined = undefined;
if (items.length > limit) {
  items.pop();
  const lastReturnedItem = items[items.length - 1]!;
  nextCursor = {
    id: lastReturnedItem.posts.id,
    createdAt: lastReturnedItem.posts.createdAt,
  };
}
return { items, nextCursor };
```

The `limit + 1` trick: query for one extra row. If it comes back, there's a next page — pop the extra and use the last returned item as the next cursor. If not, you're at the end. On the client, TanStack Query's `useInfiniteQuery` handles the rest, with an `IntersectionObserver` on a sentinel div that triggers `fetchNextPage()` on scroll.
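To isolate the idea from Drizzle, here's a minimal, framework-free sketch of the same keyset + `limit + 1` pattern over an in-memory array (the `Item` type and `paginate` helper are illustrative, not code from the repo):

```ts
type Item = { id: string; createdAt: number };
type Cursor = { id: string; createdAt: number };

// Items are assumed pre-sorted by (createdAt DESC, id DESC),
// mirroring the ORDER BY in the real query.
function paginate(sorted: Item[], cursor: Cursor | undefined, limit: number) {
  const filtered = cursor
    ? sorted.filter(
        (it) =>
          it.createdAt < cursor.createdAt ||
          (it.createdAt === cursor.createdAt && it.id < cursor.id),
      )
    : sorted;

  // Take one extra row to detect whether a next page exists.
  const page = filtered.slice(0, limit + 1);
  let nextCursor: Cursor | undefined;
  if (page.length > limit) {
    page.pop();
    const last = page[page.length - 1]!;
    nextCursor = { id: last.id, createdAt: last.createdAt };
  }
  return { items: page, nextCursor };
}
```

Because the cursor predicate and the sort order agree term-for-term, a concurrent insert at the head of the feed can't shift or duplicate items on later pages.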
Real-time as enhancement, not requirement
Ably provides managed pub/sub with channels per entity — likes:post:{id}, conversation:{id}, notifications:{userId}, and a global presence channel for online indicators. But every Ably publish is wrapped in try/catch. If Ably is down, the like was still persisted, the message was still sent, the notification was still created. The app works without real-time — it just feels slower.
```ts
// After the DB write — real-time is an enhancement, not a requirement
try {
  const [likeCount] = await db
    .select({ count: count() })
    .from(likes)
    .where(eq(likes.postId, postId));

  const channelName = `likes:post:${postId}`;
  await ably.channels.get(channelName).publish('like_update', {
    type: 'post_like_toggled',
    postId,
    userId,
    isLiked,
    count: likeCount?.count ?? 0,
    timestamp: new Date(),
  });
} catch (ablyError) {
  console.error('Failed to publish like update to Ably:', ablyError);
  // Don't throw — the like was still toggled successfully
}
```

The trade-off with Ably over raw WebSockets: I get reconnection, message ordering, and presence out of the box, but at the cost of vendor lock-in and per-message pricing. For a project like this, the development speed was worth it — implementing reliable reconnection and message ordering from scratch is a surprisingly deep rabbit hole.
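The wrap-and-swallow pattern repeats at every publish site, so it can be factored into a small helper — a sketch, not code from the repo; `publish` here is any async side effect:

```ts
// Run a best-effort side effect: log failures, never throw.
// Returns true if the publish succeeded, false otherwise.
async function safePublish(
  label: string,
  publish: () => Promise<void>,
): Promise<boolean> {
  try {
    await publish();
    return true;
  } catch (err) {
    console.error(`Failed to publish ${label}:`, err);
    return false;
  }
}
```

A call site then reads as `await safePublish('like update', () => channel.publish(...))` — the DB write above it succeeds or fails on its own terms.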
Separate like tables per entity
Instead of a single polymorphic likes table with entityType + entityId, I used three separate tables: likes (posts), comment_likes, and comment_reply_likes. More tables, but each one has real foreign key constraints and simple queries. A polymorphic table can't have FKs pointing to three different tables — you'd need application-level enforcement, which is the kind of thing that works until it doesn't.
The toggle pattern is identical across all three. Check if a like exists, delete or insert, fire off a notification (if it's not your own content), then publish the new count to Ably:
```ts
const [existingLike] = await db
  .select()
  .from(likes)
  .where(and(eq(likes.userId, userId), eq(likes.postId, postId)));

let isLiked: boolean;
if (existingLike) {
  await db
    .delete(likes)
    .where(and(eq(likes.userId, userId), eq(likes.postId, postId)));
  isLiked = false;
} else {
  const [newLike] = await db
    .insert(likes)
    .values({ userId, postId })
    .returning();
  isLiked = true;

  // Notification — fire-and-forget, only if not liking own post
  if (newLike && postWithOwner.postOwnerId !== userId) {
    try {
      await createNotification({
        recipientId: postWithOwner.postOwnerId,
        actorId: userId,
        type: 'like',
        postId,
        likeId: newLike.id,
      });
    } catch (notificationError) {
      console.error('Failed to create like notification:', notificationError);
    }
  }
}
```

Conversation soft-delete
DMs use a two-participant conversation model. Participants are always stored in lexicographic order so a unique(participant1Id, participant2Id) constraint works regardless of who initiates. When a user "deletes" a conversation, I set their participantNDeletedAt timestamp instead of removing the row. Each user gets their own view — the other participant is unaffected.
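The sorted-participants invariant boils down to one tiny normalization, sketched here as a standalone helper (the repo presumably applies this wherever a conversation is created or looked up):

```ts
// Store the two participant IDs in lexicographic order so the
// unique(participant1Id, participant2Id) constraint matches
// regardless of who initiated the conversation.
function canonicalParticipants(a: string, b: string): [string, string] {
  return a < b ? [a, b] : [b, a];
}
```

`canonicalParticipants(senderId, recipientId)` returns the same pair no matter which side sends first, so lookups and the unique constraint always agree.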
```ts
// Delete = set a per-participant timestamp, not remove the row
const isParticipant1 = conv.participant1Id === userId;
const now = new Date();

await db
  .update(conversations)
  .set({
    participant1DeletedAt: isParticipant1 ? now : conv.participant1DeletedAt,
    participant2DeletedAt: !isParticipant1 ? now : conv.participant2DeletedAt,
  })
  .where(eq(conversations.id, conversationId));
```

Message queries then filter on this timestamp: only messages created after the deletion are shown. If a user sends a new message to a "deleted" conversation, it reappears with only the new messages visible.
```ts
// Only show messages created after the user's deletion timestamp
const userDeletedAt = isParticipant1
  ? conv.participant1DeletedAt
  : conv.participant2DeletedAt;

const whereConditions = [eq(messages.conversationId, conversationId)];
if (userDeletedAt) {
  whereConditions.push(gt(messages.createdAt, userDeletedAt));
}
if (cursor) {
  whereConditions.push(lt(messages.createdAt, new Date(cursor)));
}

const messagesData = await db
  .select()
  .from(messages)
  .where(and(...whereConditions))
  .orderBy(desc(messages.createdAt))
  .limit(limit + 1);
```

The conversations schema that supports all of this:

```ts
export const conversations = pgTable(
  'conversations',
  {
    id: varchar('id').$defaultFn(() => createId()).primaryKey(),
    // always stored in consistent (sorted) order
    participant1Id: varchar('participant1_id', { length: 32 })
      .references(() => users.id, { onDelete: 'cascade' }).notNull(),
    participant2Id: varchar('participant2_id', { length: 32 })
      .references(() => users.id, { onDelete: 'cascade' }).notNull(),
    // per-participant soft-delete (null = not deleted)
    participant1DeletedAt: timestamp('participant1_deleted_at'),
    participant2DeletedAt: timestamp('participant2_deleted_at'),
    // read receipts
    participant1LastSeenAt: timestamp('participant1_last_seen_at'),
    participant2LastSeenAt: timestamp('participant2_last_seen_at'),
    ...lifecycleDates,
  },
  (table) => [
    unique('conversations_participants_unique').on(
      table.participant1Id,
      table.participant2Id,
    ),
  ],
);
```

AI alt-text generation
Every uploaded image gets alt text generated by GPT-4o-mini via the Vercel AI SDK — a 20-second timeout, 18-word limit, and a fallback to the filename if it fails. The AI call runs in Promise.all across all images in a post, so it's parallel. If a user provides their own alt text in the upload form, the AI is skipped entirely. The Create form has an "Accessibility" accordion where users can review and edit alt text before posting.
```ts
export const generateAltText = async (imagePath: string) => {
  const { text } = await generateText({
    model: openai_4o_mini,
    system:
      'You will receive an image. Create an alt text for the image. ' +
      'Be concise. Use adjectives when necessary. ' +
      'Use simple language. No more than 18 words.',
    abortSignal: AbortSignal.timeout(20000),
    messages: [
      {
        role: 'user',
        content: [{ type: 'image', image: imagePath }],
      },
    ],
  });
  return text;
};

// Called with fallback in createPost:
await Promise.all(
  input.files.map(async (file) => {
    if (!file.alt) {
      try {
        file.alt = await generateAltText(file.url);
      } catch (error) {
        if (error instanceof Error && error.name === 'TimeoutError') {
          console.error(`Alt text timed out for ${file.name}`);
        }
        file.alt = file.name; // fallback to filename
      }
    }
  }),
);
```

Notification safety
Notifications are fire-and-forget everywhere. Every createNotification call is wrapped in try/catch at the call site — a failed notification never fails a post, like, or comment. Self-action suppression is enforced both at the call site (check before calling) and inside createNotification itself (throws on same actor/recipient), so liking your own post silently produces no notification.
Deduplication prevents notification storms: before inserting, the service checks for an existing notification with the same actor, recipient, type, and entity within the last hour. Rapid like toggles or repeated comments don't flood the recipient's notification list.
```ts
// Self-action suppression — liking your own post creates no notification
if (input.actorId === input.recipientId) {
  throw new TRPCError({
    code: 'BAD_REQUEST',
    message: 'Cannot create notification for self-action',
  });
}

// Deduplication — same (actor, recipient, type, entity) within the last hour
const oneHourAgo = new Date(Date.now() - 60 * 60 * 1000);
const existingNotification = await db
  .select()
  .from(notifications)
  .where(
    and(
      eq(notifications.recipientId, input.recipientId),
      eq(notifications.actorId, input.actorId),
      eq(notifications.type, input.type),
      gt(notifications.createdAt, oneHourAgo),
      ...typeSpecificConditions, // entity-specific FK checks
    ),
  )
  .limit(1);

if (existingNotification[0]) {
  return existingNotification[0]; // skip insert, return existing
}
```

The hard parts
Optimistic like toggles were trickier than expected. The UI flips the like count and heart icon immediately, before the mutation completes. Then the Ably subscription receives the authoritative server values and overwrites the optimistic state. If two users like simultaneously, both clients converge to the correct count. But there's a brief moment where the count can be wrong — acceptable for the UX gain.
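The optimistic flip itself is a pure state transition. Here's a sketch of it outside TanStack Query — the `PostLikeState` shape is assumed, not taken from the repo:

```ts
type PostLikeState = { isLiked: boolean; likeCount: number };

// Flip the heart and adjust the count immediately, before the
// mutation settles. The Ably subscription later overwrites this
// with the authoritative server count.
function applyOptimisticToggle(state: PostLikeState): PostLikeState {
  return {
    isLiked: !state.isLiked,
    likeCount: state.likeCount + (state.isLiked ? -1 : 1),
  };
}
```

Because the server broadcast carries absolute values rather than deltas, two clients that both apply an optimistic +1 still converge once the `like_update` message lands.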
Real-time memory management in the chat took iteration. The useChatMessages hook merges server-fetched pages with real-time messages in a Map. A cleanup interval evicts real-time messages older than 5 minutes or trims to the 10 most recent — without this, a long chat session would accumulate unbounded state. Reaction updates live in a separate Map, and both are cleared on conversation switch.
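The eviction pass can be sketched as a pure function over the Map — the `RealtimeMessage` shape is assumed for illustration; the age and count thresholds come from the text:

```ts
type RealtimeMessage = { id: string; receivedAt: number };

const MAX_AGE_MS = 5 * 60 * 1000; // evict messages older than 5 minutes
const MAX_KEPT = 10;              // and trim to the 10 most recent

function evictStale(
  store: Map<string, RealtimeMessage>,
  now: number,
): Map<string, RealtimeMessage> {
  const fresh = [...store.values()]
    .filter((m) => now - m.receivedAt <= MAX_AGE_MS)
    .sort((x, y) => y.receivedAt - x.receivedAt) // newest first
    .slice(0, MAX_KEPT);
  return new Map(fresh.map((m) => [m.id, m]));
}
```

Running this on an interval bounds the real-time buffer regardless of session length; server-fetched pages stay in TanStack Query's cache, which has its own garbage collection.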
Race conditions in follows: the toggleFollow insert can hit a unique constraint if another request beats it. Instead of using a transaction, the catch block checks for the specific constraint name and returns { action: "followed", isFollowing: true } — the idiomatic "try insert, handle conflict" pattern.
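Postgres reports a unique violation with SQLSTATE `23505`, so the catch block only needs a narrow classifier. A sketch — the error shape mirrors what the `postgres`/`pg` drivers expose, and the constraint name `follows_unique` is illustrative:

```ts
// Postgres signals a unique violation with SQLSTATE 23505. Checking
// the constraint name distinguishes "duplicate follow" from any other
// unique constraint that might fire in the same statement.
function isDuplicateFollow(err: unknown): boolean {
  return (
    typeof err === 'object' &&
    err !== null &&
    (err as { code?: string }).code === '23505' &&
    (err as { constraint?: string }).constraint === 'follows_unique'
  );
}
```

In the catch block, `if (isDuplicateFollow(err))` means the concurrent request already created the row, so returning `{ action: "followed", isFollowing: true }` is correct; anything else rethrows.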
Username generation happens on Clerk webhook. When a user signs up, a user.created event fires and GPT-4o-mini generates a username (up to 3 attempts with DB uniqueness checks, falling back to name + timestamp). The generated username is written back to Clerk via clerkClient().users.updateUser(). The service blocks reserved names like home, explore, messages, settings to prevent URL collisions with app routes.
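The retry loop reduces to a small function once the LLM call and the DB check are injected — a sketch, not the repo's actual service (the reserved list below is the subset named in the text):

```ts
const RESERVED = new Set(['home', 'explore', 'messages', 'settings']);

// generate: produces a candidate (the real app asks GPT-4o-mini);
// isTaken: checks DB uniqueness. Both are injected for testability.
async function pickUsername(
  generate: () => Promise<string>,
  isTaken: (name: string) => Promise<boolean>,
  fallbackName: string,
): Promise<string> {
  for (let attempt = 0; attempt < 3; attempt++) {
    const candidate = await generate();
    if (!RESERVED.has(candidate) && !(await isTaken(candidate))) {
      return candidate;
    }
  }
  // Fall back to name + timestamp after three failed attempts
  return `${fallbackName}${Date.now()}`;
}
```

Blocking route names here is what keeps `/{username}` profile URLs from shadowing `/home`, `/explore`, and friends.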
Batch N+1 elimination in the conversations list: getUserConversations fetches all conversations in one query, all unique participant records in one inArray query, and latest messages using a row_number() window function partitioned by conversation — no N+1 queries for any of the three data sets.
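The latest-message lookup could look roughly like this in SQL — a sketch with table and column names assumed to mirror the schema above, not the query the repo actually emits:

```sql
-- One row per conversation: rank messages newest-first within each
-- conversation, then keep only rank 1.
SELECT *
FROM (
  SELECT
    m.*,
    row_number() OVER (
      PARTITION BY m.conversation_id
      ORDER BY m.created_at DESC
    ) AS rn
  FROM messages m
) ranked
WHERE rn = 1;
```

One query returns the newest message for every conversation at once, which is what replaces the per-conversation lookup loop that would otherwise cause the N+1.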