with open("notes.txt", "r") as f:
contents = f.read()
print(contents)
with open(...) as f: is the safe pattern — Python closes the file for you automatically when the indented block ends.
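To see what that buys you, here is a sketch of the long form `with` replaces, using a hypothetical scratch file in the system temp directory:

```python
import os
import tempfile

# Hypothetical scratch file, just for illustration.
path = os.path.join(tempfile.gettempdir(), "with_demo.txt")

# The long form, without `with`: you must close the file yourself.
f = open(path, "w")
try:
    f.write("hello\n")
finally:
    f.close()

# The `with` form closes the file automatically when the block ends.
with open(path) as f:
    print(f.read())
print(f.closed)  # True
```

Either way the file gets closed, but `with` makes it impossible to forget.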
with open("output.txt", "w") as f:
f.write("Hello!\n")
f.write("Second line.\n")
"w" overwrites; "a" appends.
with open("notes.txt") as f:
for line in f:
print(line.strip())
from pathlib import Path
home = Path.home() # ~/
desktop = home / "Desktop" # ~/Desktop
output = desktop / "report.xlsx" # ~/Desktop/report.xlsx
output.exists() # True/False
output.name # 'report.xlsx'
output.suffix # '.xlsx'
output.parent # the containing folder, e.g. PosixPath('/Users/you/Desktop')
The / operator joins paths in the correct way for Mac/Linux/Windows. Use this everywhere instead of string concatenation.
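A small sketch of that join behavior, mixing `Path` objects and plain strings:

```python
from pathlib import Path

# / accepts Path objects and plain strings alike,
# and inserts the right separator for the current OS.
q1 = Path("reports") / "2026" / "q1"
print(q1.parts)            # ('reports', '2026', 'q1')
print(q1 / "sales.xlsx")   # reports/2026/q1/sales.xlsx on Mac/Linux, backslashes on Windows
```

No manual "/" or "\\" anywhere, so the same script runs unchanged on every platform.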
from pathlib import Path
folder = Path("reports/2026/q1")
for path in folder.glob("*.xlsx"):
    print(path.name, path.stat().st_size)
glob("*.xlsx") = "every .xlsx in this folder."
rglob("*.xlsx") = "every .xlsx in this folder and all subfolders."
data/
    2026-01/
        sales.csv
        returns.csv
    2026-02/
        sales.csv
    ...
from pathlib import Path
for p in Path("data").rglob("*.csv"):
print(p)
import pandas as pd
from pathlib import Path

frames = [pd.read_csv(p) for p in Path("data").rglob("sales.csv")]
combined = pd.concat(frames, ignore_index=True)
print(len(combined))
A handful of lines, dozens of files, one DataFrame. This pattern alone has saved many an analyst's morning.
Recap:

- with open(...) as f: is the safe way to read and write text files.
- pathlib.Path and the / operator build cross-platform paths.
- glob("*.csv") and rglob("*.csv") iterate over every file matching a pattern.
- pd.concat() merges many files into one DataFrame.

Exercise: write a script that lists every file in your Downloads folder along with its size. Print a one-line total at the end.