Nov 26 - by Adrian Thompson
Most Python developers waste hours on tasks that could be done in seconds, if they knew the right tricks. You've probably written a loop to count items, manually formatted strings, or rebuilt a list from scratch when Python already had a one-liner for it. These aren't obscure hacks. They're built-in features that every pro uses daily. This guide cuts through the noise and gives you the Python tricks that actually move the needle in real projects.
Use List Comprehensions Instead of Loops
Looping to create a new list is the most common Python mistake beginners make. It's not wrong, just slower and messier than it needs to be. List comprehensions are faster, cleaner, and more readable.
Instead of this:
squares = []
for x in range(10):
    squares.append(x ** 2)
Write this:
squares = [x ** 2 for x in range(10)]
It’s not just shorter. It runs about 20% faster in most cases. And it’s immediately clear what you’re doing. You can even add conditions:
even_squares = [x ** 2 for x in range(10) if x % 2 == 0]
This filters even numbers and squares them in one line. No extra variables. No temporary lists. Just pure Python.
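The same comprehension syntax also builds dicts and sets directly; a quick sketch:

```python
# Dict comprehension: map each number to its square.
squares_by_number = {x: x ** 2 for x in range(5)}

# Set comprehension: collect unique word lengths.
unique_lengths = {len(w) for w in ["a", "bb", "cc"]}

print(squares_by_number)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
print(unique_lengths)     # {1, 2}
```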
Unpack Variables Without Temporary Variables
Ever written code like this?
temp = a
a = b
b = temp
That’s 2005-style Python. Today, you unpack in one line:
a, b = b, a
It works with any number of values. Swap three variables? Easy:
x, y, z = z, x, y
Or pull values from a function that returns a tuple:
def get_user_info():
    return "Alice", 28, "Atlanta"
name, age, city = get_user_info()
You don’t need to store the whole tuple. Just grab what you need. And if you want to ignore one value, use an underscore:
name, _, city = get_user_info()
That's not just syntax; it's a mindset shift. Stop storing everything. Extract only what matters.
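Unpacking also handles sequences of unknown length: a starred name collects whatever the other targets don't claim. A small sketch:

```python
scores = [98, 85, 91, 77, 64]

# The starred target soaks up the middle of the list.
best, *middle, worst = scores

print(best)    # 98
print(middle)  # [85, 91, 77]
print(worst)   # 64
```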
Use enumerate() When You Need Indexes
Looping with range(len(list)) is a red flag. It's verbose and error-prone, and it buries what the loop is actually doing.
Instead of this:
items = ["apple", "banana", "cherry"]
for i in range(len(items)):
    print(i, items[i])
Use this:
for i, item in enumerate(items):
    print(i, item)
It's cleaner and safer, with no index arithmetic to get wrong. You can even start counting from 1:
for i, item in enumerate(items, 1):
    print(f"{i}. {item}")
Output:
1. apple
2. banana
3. cherry
This isn't just a trick; it's the standard way professional Python developers write loops.
Use set() for Fast Membership Tests
Checking if an item exists in a list? That’s a recipe for slow code. Lists are O(n). Sets are O(1).
Imagine you're validating user IDs against 10,000 known IDs. With a list, each check scans up to 10,000 entries. With a set, each check is a single hash lookup on average.
valid_ids = {101, 102, 103, 104, 105}

# Fast
if user_id in valid_ids:
    print("Valid")

# Slow (don't do this)
valid_ids_list = [101, 102, 103, 104, 105]
if user_id in valid_ids_list:
    print("Valid")
Convert your lists to sets when you only care about presence, not order. It’s a 10x to 100x speedup on large datasets.
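If you want to see the gap yourself, the standard timeit module makes a rough benchmark easy (absolute numbers vary by machine; the ratio is what matters):

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Look up a value near the end of the list: worst case for a linear scan.
list_time = timeit.timeit(lambda: 99_999 in data_list, number=1_000)
set_time = timeit.timeit(lambda: 99_999 in data_set, number=1_000)

print(f"list: {list_time:.4f}s, set: {set_time:.4f}s")
```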
Use collections.Counter for Counting
Counting how many times each word appears? Don’t write a loop with if key in dict. Use Counter.
from collections import Counter
text = "python is great python is powerful"
words = text.split()
counter = Counter(words)
print(counter)
# Output: Counter({'python': 2, 'is': 2, 'great': 1, 'powerful': 1})
print(counter.most_common(2))
# Output: [('python', 2), ('is', 2)]
It's not just convenient; the core counting loop is implemented in C. You get the most frequent items, total counts, and even arithmetic operations between counters.
Need to subtract one set of counts from another? Easy:
counter1 = Counter(["a", "b", "c"])
counter2 = Counter(["b", "c", "d"])
result = counter1 - counter2
print(result)
# Output: Counter({'a': 1})
This is the kind of thing you’ll use every time you process logs, analyze text, or track user behavior.
Use pathlib for File Paths
Still using os.path.join()? That's legacy style. pathlib has been the modern way to handle files since Python 3.4.
import os
from pathlib import Path

# Old way
path = os.path.join("data", "logs", "app.log")

# New way
path = Path("data") / "logs" / "app.log"

# Check if file exists
if path.exists():
    print("File found")

# Read content
content = path.read_text()

# Write content
path.write_text("new content")

# List all .txt files
for file in Path("data").glob("*.txt"):
    print(file.name)
No more string concatenation. No more backslashes. No more platform-specific issues. Just clean, readable code that works everywhere.
Use __slots__ to Reduce Memory Usage
Running a Python app with thousands of objects? Memory adds up fast. By default, every object stores its attributes in a dictionary. That's flexible, but heavy.
Use __slots__ to lock down the structure:
class Point:
    __slots__ = ['x', 'y']

    def __init__(self, x, y):
        self.x = x
        self.y = y
Now each instance uses roughly 40% less memory (exact savings depend on the Python version and attribute count). Dropping __dict__ also makes attribute access slightly faster. This matters in data-heavy apps like machine learning pipelines, game engines, or real-time analytics.
It’s not for every class. But if you’re creating thousands of simple objects, this trick saves resources without changing your logic.
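A useful side effect: a slotted class rejects attributes you didn't declare, which turns typos into immediate errors instead of silent new attributes. A minimal sketch:

```python
class Point:
    __slots__ = ['x', 'y']

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3  # 'z' is not in __slots__, so this raises AttributeError
except AttributeError as e:
    print("rejected:", e)
```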
Use itertools for Complex Iterations
Need to cycle through values? Group items? Combine multiple iterators? itertools has you covered.
Loop infinitely:
from itertools import cycle
colors = cycle(["red", "green", "blue"])
for _ in range(10):
    print(next(colors))
Group consecutive items:
from itertools import groupby
numbers = [1, 1, 2, 2, 2, 3, 4, 4]
for key, group in groupby(numbers):
    print(key, len(list(group)))
Output:
1 2
2 3
3 1
4 2
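One caveat the example above glosses over: groupby only groups consecutive items, so unsorted data has to be sorted first. A short sketch:

```python
from itertools import groupby

numbers = [3, 1, 2, 1, 3, 2, 2]

# Without sorting, equal values in different runs would form separate groups.
# Sorting first puts all occurrences of each value next to each other.
counts = {key: len(list(group)) for key, group in groupby(sorted(numbers))}

print(counts)  # {1: 2, 2: 3, 3: 2}
```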
Generate all pairs:
from itertools import combinations
items = ["a", "b", "c"]
for pair in combinations(items, 2):
    print(pair)
Output:
('a', 'b')
('a', 'c')
('b', 'c')
These aren’t gimmicks. They’re tools for building clean, efficient pipelines without writing custom loops.
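As one more pipeline-style sketch, chain and islice combine and slice iterators lazily, without building intermediate lists (the batch names here are illustrative):

```python
from itertools import chain, islice

batch_a = ["log1", "log2"]
batch_b = ["log3", "log4", "log5"]

# chain() stitches iterables together; islice() takes the first n items lazily.
first_three = list(islice(chain(batch_a, batch_b), 3))

print(first_three)  # ['log1', 'log2', 'log3']
```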
Use functools.lru_cache to Avoid Repeating Work
Got a function that recalculates the same values over and over? Use caching.
from functools import lru_cache
@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # Instant, even though it's recursive
This caches up to the 128 most recently used results. No more redundant calculations. Perfect for recursive algorithms, API calls, or expensive data transformations.
It’s one line of code. Zero extra dependencies. And it can turn a 10-second function into a 0.01-second one.
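You can also check how well the cache is performing: every lru_cache-decorated function exposes cache_info() and cache_clear(). A quick sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(50)
info = fibonacci.cache_info()
print(info)  # CacheInfo(hits=48, misses=51, maxsize=128, currsize=51)

fibonacci.cache_clear()  # reset the cache if inputs change meaning
```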
Use typing to Catch Errors Early
Python is dynamic, but that doesn't mean you should write code blind. Type hints aren't for show; they're for catching bugs before they hit production.
from typing import List, Dict, Optional
def process_users(users: List[Dict[str, str]]) -> Optional[str]:
    if not users:
        return None
    return users[0]["name"]
Tools like mypy or your IDE will warn you if you pass a string instead of a list. You'll catch mistakes before tests run. It's not about strictness; it's about clarity and safety.
Even if you don’t run type checking, it makes your code self-documenting. Other devs (and future you) will thank you.
Use contextlib for Cleaner Resource Management
Tired of remembering to close files, database connections, and network sockets? Use contextlib to automate it.
from contextlib import contextmanager
@contextmanager
def database_connection():
    conn = connect_to_db()  # stand-in for your real connection function
    try:
        yield conn
    finally:
        conn.close()

# Use it like this:
with database_connection() as conn:
    result = conn.query("SELECT * FROM users")
# Connection closes automatically, even if an error happens
This ensures cleanup happens every time. No more forgotten conn.close(). No more memory leaks. Just clean, reliable code.
Python’s standard library is packed with tools like this. You don’t need third-party packages to write better code. You just need to know what’s already there.
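Another small tool from the same module is suppress, which replaces a try/except/pass block with one readable line. A quick sketch:

```python
from contextlib import suppress

steps = []
with suppress(ValueError):
    int("not a number")      # raises ValueError, which suppress swallows
    steps.append("skipped")  # never runs: the block exits at the error
steps.append("continued")    # execution resumes after the with block

print(steps)  # ['continued']
```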
Stop Writing Boilerplate. Start Writing Python.
These tricks aren’t about being clever. They’re about being efficient. Every one of them reduces bugs, cuts runtime, and makes your code easier to maintain.
Stop copying old tutorials. Stop writing loops when comprehensions exist. Stop using lists for lookups when sets are faster. Stop reinventing the wheel.
Python’s power isn’t in its syntax. It’s in its standard library. The more you use it, the less code you write-and the better your code becomes.
Start with one trick this week. Then another. Soon, you won’t even think about writing the old way. You’ll just write Python.
Are Python tricks safe to use in production code?
Yes. The tricks in this guide use built-in Python features: list comprehensions, pathlib, Counter, lru_cache, all long-standing parts of the standard library. They're battle-tested, widely used, and supported by the Python core team. Companies like Instagram, Dropbox, and Spotify rely on these exact patterns. The only "trick" to avoid is third-party magic or undocumented behavior. Stick to documented, standard tools, and you're safe.
Do I need to memorize all of these tricks?
No. You don’t need to memorize them. You need to recognize when they apply. Start by replacing one loop with a list comprehension. Then use enumerate() next time you need an index. Use pathlib for file paths instead of os.path. Over time, these become automatic. Think of them like keyboard shortcuts-you don’t memorize them, you just start using them because they’re faster.
What if my team uses older Python versions?
Most of these tricks work in Python 3.6+, and every supported Python version as of 2025 is newer than that. Even on Python 3.5 you can still use pathlib, Counter, lru_cache, and __slots__; only f-strings and some newer typing features require 3.6 or later. If your team uses Python 2.7, that's the real problem: upgrade first. No trick compensates for running unsupported software.
Do these tricks make code harder to read for beginners?
Not if you use them correctly. A list comprehension is clearer than a 5-line loop when it’s simple. The problem isn’t the trick-it’s overcomplicating. Don’t nest three comprehensions or chain 10 itertools functions. Write one clear, focused expression. If someone new reads it and says, "Oh, I get it," you’ve done it right. Clarity beats cleverness every time.
Which of these tricks gives the biggest performance boost?
Switching from list to set for membership tests. If you’re checking if 1,000 items exist in a list of 10,000, you’re doing 10 million comparisons. With a set, you do 1,000. That’s a 10,000x speedup in worst-case scenarios. It’s the single biggest performance win you can get without rewriting your algorithm.