
Abdulhamid Eshmamatov

Software Engineer (Python) & Team Lead
📍 Tashkent, Uzbekistan ✈️ @abdulhamidllll ✉️ abdulhamidhamidiy@gmail.com 🌐 abdulhamid.uz 💼 LinkedIn
Python · Django · Microservices · Team Lead · 5+ years experience

Work Experience

UZINFOCOM
Team Lead
Nov 2023 — Present
  • Architected and delivered large-scale Python projects including API Gateway for microservices routing and webhook systems for real-time third-party integrations.
  • Designed and managed microservices-based architecture with 20+ microservices.
  • Implemented Kafka for event-driven architecture handling millions of messages daily with efficient log management.
  • Built high-performance dashboards with ClickHouse — achieved 100× query speed improvement on tables with millions of rows.
  • Created and optimized complex PostgreSQL queries; used materialized views, indexing, and replication for high availability.
  • Deployed large-scale projects using Docker with robust, efficient CI/CD pipelines.
  • Managed high-volume file storage with Zenko and MinIO, ensuring seamless data replication.
DBM
Python Developer
Sep 2021 — Nov 2023
  • Built and maintained Python applications using Django and Django REST Framework.
  • Implemented REST APIs with efficient database interactions and performance optimization best practices.
  • Used Redis and Celery for asynchronous task processing and caching.
  • Collaborated on system integration projects with PostgreSQL backend services.
Freelance
Python Developer — Telegram Bots & Web
Mar 2021 — Sep 2021
  • Developed custom Telegram bots for real estate, service booking, and payment integrations (Click, Payme).
  • Integrated APIs, databases, and deployed projects on VPS (Ubuntu, Nginx).

Technical Skills

Languages & Frameworks
Python · Django · Django REST Framework · Bash
Databases
PostgreSQL · ClickHouse · Redis · SQLite
Messaging & Async
Kafka · Celery · WebSockets
DevOps & Infrastructure
Docker · Nginx · Linux · MinIO · ELK · Grafana
Tools & Practices
Git / GitHub / GitLab · REST API · SOLID · OOP · Algorithms & DS · Payments (Click, Payme) · pandas · NumPy

Education

National University of Uzbekistan
Bachelor's degree · Mathematics
2021

PostgreSQL Deep Dive

PostgreSQL Performance Internals: Views, Indexes, VACUUM, Replication & EXPLAIN

After working with PostgreSQL at scale — managing databases with millions of rows across 20+ microservices at UZINFOCOM — I've relied on these core features daily to keep systems fast and reliable. Here's a practical breakdown of each.

VIEW — Virtual Table

A VIEW is a named query stored in the database. It executes the underlying SQL every time it is referenced — there is no cached result. Views are ideal for simplifying complex joins, enforcing access control, and creating reusable query logic.

CREATE VIEW active_users AS
  SELECT id, username, email
  FROM users
  WHERE is_active = TRUE;

Use case: encapsulate business logic; no additional disk space used. Every query against the view re-runs the SQL.

MATERIALIZED VIEW — Cached Query Results

A MATERIALIZED VIEW physically stores the result of the query on disk. Unlike a regular view, it does not re-execute on every access — making it dramatically faster for heavy aggregations. At UZINFOCOM we achieved 100× query speed improvements by replacing slow subqueries with materialized views on analytics dashboards. The trade-off: data must be refreshed explicitly with REFRESH MATERIALIZED VIEW.

CREATE MATERIALIZED VIEW monthly_revenue AS
  SELECT DATE_TRUNC('month', created_at) AS month,
         SUM(amount) AS total
  FROM orders
  GROUP BY 1;

-- Refresh without locking reads:
REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_revenue;

Tip: use CONCURRENTLY to avoid table locks during refresh; requires a unique index on the materialized view.
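For example, since the view above groups by month, the month column is unique per row, so a unique index on it (index name here is illustrative) satisfies the CONCURRENTLY requirement:

-- Required before REFRESH ... CONCURRENTLY will work
CREATE UNIQUE INDEX idx_monthly_revenue_month
  ON monthly_revenue (month);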

INDEX — Fast Lookups

Indexes allow PostgreSQL to locate rows without scanning the entire table. The default B-Tree index handles equality and range queries. Other types include GIN (full-text search, JSONB), GiST (geometric / range types), BRIN (very large, physically ordered tables), and Hash (equality only).

-- Standard B-Tree
CREATE INDEX idx_orders_user ON orders(user_id);

-- Partial index (only index active rows)
CREATE INDEX idx_active_orders ON orders(user_id)
  WHERE status = 'active';

-- Composite index
CREATE INDEX idx_user_date ON orders(user_id, created_at DESC);

-- GIN for JSONB
CREATE INDEX idx_meta ON events USING GIN (metadata);

Over-indexing slows down writes. Audit unused indexes with pg_stat_user_indexes.
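One way to run that audit — a sketch, since the right thresholds depend on your workload and on when statistics were last reset:

-- Indexes never scanned since the last stats reset, largest first
SELECT s.relname AS table_name,
       s.indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique  -- keep unique/constraint-backing indexes
ORDER BY pg_relation_size(s.indexrelid) DESC;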

VACUUM — Dead Tuple Cleanup

PostgreSQL uses MVCC (Multi-Version Concurrency Control): old row versions (dead tuples) remain on disk after UPDATE and DELETE. VACUUM marks that space as reusable (it does not shrink the table file) and updates the visibility map. VACUUM FULL rewrites the table entirely and returns space to the OS, but takes an exclusive lock — avoid on production outside maintenance windows. AUTOVACUUM handles this automatically; tune its thresholds for high-write tables.

-- Standard vacuum (no lock)
VACUUM orders;

-- With statistics update
VACUUM ANALYZE orders;

-- Check bloat
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
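For a high-write table, autovacuum can be tuned per table so it triggers long before the global default of 20% dead rows. The values below are illustrative, not prescriptive:

-- Vacuum after ~1% of rows are dead instead of the default 20%
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_analyze_scale_factor = 0.005
);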

REPLICATION — High Availability

PostgreSQL supports Streaming Replication (physical, byte-for-byte replay of the WAL) and Logical Replication (row-level changes, table-selective, works across major versions). In production at UZINFOCOM we run a primary + standby setup with synchronous commit on critical write paths and asynchronous replicas serving analytics reads.

-- primary: postgresql.conf
wal_level = replica
max_wal_senders = 5
synchronous_standby_names = 'standby1'

-- standby: postgresql.conf + standby.signal (recovery.conf before PostgreSQL 12)
primary_conninfo = 'host=primary port=5432 user=replicator'
hot_standby = on

-- Logical replication
CREATE PUBLICATION my_pub FOR TABLE orders, users;
CREATE SUBSCRIPTION my_sub
  CONNECTION 'host=primary dbname=app user=replicator'
  PUBLICATION my_pub;

Monitor replication lag: SELECT * FROM pg_stat_replication;
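Lag can be quantified in bytes by comparing the primary's current WAL position with each standby's replay position (run on the primary; function names are the PostgreSQL 10+ pg_wal_* spellings):

SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;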

EXPLAIN / EXPLAIN ANALYZE — Query Plan Inspection

EXPLAIN shows the query execution plan PostgreSQL will use (estimated costs, row counts, join strategies). EXPLAIN ANALYZE actually runs the query and adds real timing data — essential for identifying slow sequential scans, bad row estimates, or nested loops on large datasets.

-- Plan only (no execution)
EXPLAIN SELECT * FROM orders WHERE user_id = 42;

-- Plan + real timings (executes the query)
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
  SELECT u.username, COUNT(o.id)
  FROM users u
  JOIN orders o ON o.user_id = u.id
  GROUP BY u.username
  ORDER BY 2 DESC;

Key nodes to watch: Seq Scan (missing index?), Hash Join vs Nested Loop, and a high "Rows Removed by Filter" count. Use explain.dalibo.com to visualise plans.