df["date"] = pd.to_datetime(df["date"], errors="coerce")
errors="coerce" turns unparsable strings into NaT (pandas' missing-date value) instead of raising an exception.
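A minimal sketch of the coerce behavior, on a hypothetical toy frame where one value cannot be parsed:

```python
import pandas as pd

# Toy data (hypothetical): "oops" is not a date and becomes NaT.
df = pd.DataFrame({"date": ["2026-01-15", "oops", "2026-03-02"]})
df["date"] = pd.to_datetime(df["date"], errors="coerce")
print(df["date"].isna().sum())  # one unparsable row became NaT
```

With errors="raise" (the default), the same input would abort with a parse error instead.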
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["dow"] = df["date"].dt.day_name()  # e.g. 'Saturday'
df["q"] = df["date"].dt.to_period("Q")  # period dtype, e.g. 2026Q2
df["month_start"] = df["date"].dt.to_period("M").dt.to_timestamp()  # first day of the month
df["days_open"] = (df["closed"] - df["opened"]).dt.days  # both columns must be datetime
df["due_date"] = df["created"] + pd.Timedelta(days=30)
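The timedelta arithmetic above can be sketched on hypothetical ticket data (the toy frame has no created column, so due_date is computed from opened here):

```python
import pandas as pd

# Toy ticket data (hypothetical column names matching the snippets above).
df = pd.DataFrame({
    "opened": pd.to_datetime(["2026-01-01", "2026-02-10"]),
    "closed": pd.to_datetime(["2026-01-05", "2026-02-11"]),
})
df["days_open"] = (df["closed"] - df["opened"]).dt.days  # integer day counts
df["due_date"] = df["opened"] + pd.Timedelta(days=30)    # doc uses "created"; reused "opened" here
print(df["days_open"].tolist())  # [4, 1]
```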
df[df["date"] >= "2026-01-01"]  # the string is parsed to a Timestamp for the comparison
monthly = (df.set_index("date")["amount"]
           .resample("M")  # "ME" is the preferred alias in pandas >= 2.2
           .sum())
weekly = df.set_index("date")["amount"].resample("W").mean()
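On a hypothetical toy revenue series, the resample buckets look like this:

```python
import pandas as pd

# Toy revenue series (hypothetical): two January days, one February day.
s = pd.Series(
    [100.0, 50.0, 25.0],
    index=pd.to_datetime(["2026-01-03", "2026-01-20", "2026-02-05"]),
)
# Month-end-labelled buckets; "ME" is the preferred alias in pandas >= 2.2.
monthly = s.resample("M").sum()
print(monthly.tolist())  # [150.0, 25.0]
```

.resample requires a datetime (or period/timedelta) index, which is why the snippets above call set_index("date") first.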
df["date"] = pd.to_datetime(df["date"])
monthly = (df.groupby(df["date"].dt.to_period("M"))["amount"]
           .sum()
           .reset_index())
monthly["date"] = monthly["date"].astype(str)  # Period values aren't Excel-writable; stringify first
monthly.to_excel("monthly_revenue.xlsx", index=False)
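On hypothetical toy transactions, the groupby-by-period pipeline produces rows like these (to_excel omitted here, since it requires openpyxl):

```python
import pandas as pd

# Toy transactions (hypothetical) spanning two months.
df = pd.DataFrame({
    "date": ["2026-01-03", "2026-01-20", "2026-02-05"],
    "amount": [100.0, 50.0, 25.0],
})
df["date"] = pd.to_datetime(df["date"])
monthly = (df.groupby(df["date"].dt.to_period("M"))["amount"]
           .sum()
           .reset_index())
monthly["date"] = monthly["date"].astype(str)  # e.g. "2026-01", "2026-02"
print(monthly)
```

Unlike .resample("M"), the groupby route skips months with no transactions rather than emitting zero rows for them.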
Recap: pd.to_datetime(..., errors="coerce") is the safe parse. .dt gives you year, month, day, day_name, to_period. .resample("M") on a datetime index gives quick time-bucket aggregates.

Exercise: from a transactions DataFrame with date + amount, compute the average daily revenue per week of the year.
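One way to sketch the exercise (average daily revenue per week), on hypothetical toy transactions, is to resample twice: first to daily totals, then to the weekly mean of those totals:

```python
import pandas as pd

# Toy transactions (hypothetical): a few days across two ISO weeks.
df = pd.DataFrame({
    "date": pd.to_datetime(["2026-01-05", "2026-01-05", "2026-01-07", "2026-01-12"]),
    "amount": [100.0, 50.0, 30.0, 80.0],
})
# 1) total revenue per calendar day (days with no transactions become 0)
daily = df.set_index("date")["amount"].resample("D").sum()
# 2) average of those daily totals within each week (weeks end on Sunday by default)
weekly_avg = daily.resample("W").mean()
print(weekly_avg)
```

Note the design choice: resample("D").sum() zero-fills empty days, so quiet days pull the weekly average down; drop step 1 if you want the mean over transaction days only.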