Skip to Content
ConceptsPandasDate/Time Handling

Date/Time Data Handling

IntermediateAdvanced

Learning Objectives

After completing this recipe, you will be able to:

  • Convert dates with pd.to_datetime()
  • Extract date attributes with the dt accessor
  • Aggregate time series with resample()
  • Calculate moving averages with rolling()
  • Perform period-over-period analysis with shift()

0. Setup

Load order data for hands-on practice.

import pandas as pd import numpy as np # Load data DATA_PATH = '/data/' orders = pd.read_csv(DATA_PATH + 'src_orders.csv', parse_dates=['created_at']) order_items = pd.read_csv(DATA_PATH + 'src_order_items.csv') products = pd.read_csv(DATA_PATH + 'src_products.csv') users = pd.read_csv(DATA_PATH + 'src_users.csv', parse_dates=['created_at']) # Merge data for analysis df = order_items.merge(products, on='product_id').merge( orders[['order_id', 'user_id', 'created_at', 'status']], on='order_id' ) print(f"Data loaded: {len(df):,} rows")
실행 결과
Data loaded: 181,357 rows

1. Date Conversion

Theory

To work with date/time data in Pandas, you first need to convert it to datetime type.

Basic Conversion

import pandas as pd # Check if already datetime type print(f"Data type: {df['created_at'].dtype}") print(f"Date range: {df['created_at'].min()} ~ {df['created_at'].max()}") # Example of converting from string to datetime date_strings = pd.Series(['2024-01-15', '2024-02-20', '2024-03-25']) dates = pd.to_datetime(date_strings) print(f"\nConverted dates:\n{dates}")
실행 결과
Data type: datetime64[ns]
Date range: 2019-01-01 04:50:26 ~ 2024-08-05 20:13:07

Converted dates:
0   2024-01-15
1   2024-02-20
2   2024-03-25
dtype: datetime64[ns]

Format Specification

# Converting various date formats date1 = pd.to_datetime('2024-01-15') date2 = pd.to_datetime('15/01/2024', format='%d/%m/%Y') date3 = pd.to_datetime('20240115', format='%Y%m%d') date4 = pd.to_datetime('Jan 15, 2024', format='%b %d, %Y') print(f"ISO format: {date1}") print(f"European format: {date2}") print(f"Numeric format: {date3}") print(f"English format: {date4}")
실행 결과
ISO format: 2024-01-15 00:00:00
European format: 2024-01-15 00:00:00
Numeric format: 2024-01-15 00:00:00
English format: 2024-01-15 00:00:00

Key Format Codes:

CodeDescriptionExample
%Y4-digit year2024
%m2-digit month01-12
%d2-digit day01-31
%H24-hour hour00-23
%MMinute00-59
%SSecond00-59
%bMonth abbreviationJan, Feb
%BFull month nameJanuary

Error Handling

# Handling invalid dates mixed_dates = pd.Series(['2024-01-15', 'invalid', '2024-03-25', None]) # errors='coerce': Return NaT (Not a Time) on conversion failure converted = pd.to_datetime(mixed_dates, errors='coerce') print("Conversion result:") print(converted) print(f"\nConversion failures: {converted.isna().sum()}")
실행 결과
Conversion result:
0   2024-01-15
1          NaT
2   2024-03-25
3          NaT
dtype: datetime64[ns]

Conversion failures: 2

2. dt Accessor

Theory

The dt accessor extracts various attributes from datetime columns.

Extracting Date Attributes

# Extract various attributes with dt accessor df['year'] = df['created_at'].dt.year df['month'] = df['created_at'].dt.month df['day'] = df['created_at'].dt.day df['hour'] = df['created_at'].dt.hour df['quarter'] = df['created_at'].dt.quarter print(df[['created_at', 'year', 'month', 'day', 'hour', 'quarter']].head(10))
실행 결과
           created_at  year  month  day  hour  quarter
0 2019-12-29 06:51:15  2019     12   29     6        4
1 2021-08-14 21:30:30  2021      8   14    21        3
2 2024-01-13 06:24:52  2024      1   13     6        1
3 2023-07-22 13:40:43  2023      7   22    13        3
4 2020-11-25 10:50:15  2020     11   25    10        4
5 2022-04-08 17:01:27  2022      4    8    17        2
6 2021-03-18 22:13:36  2021      3   18    22        1
7 2023-06-30 09:22:51  2023      6   30     9        2
8 2020-08-11 14:44:08  2020      8   11    14        3
9 2022-10-05 19:55:33  2022     10    5    19        4

Day of Week Information

# Day of week info df['dayofweek'] = df['created_at'].dt.dayofweek # 0=Monday, 6=Sunday df['dayname'] = df['created_at'].dt.day_name() # Monday, Tuesday, ... df['is_weekend'] = df['dayofweek'] >= 5 # Weekend flag print("Distribution by day of week:") print(df['dayname'].value_counts())
실행 결과
Distribution by day of week:
dayname
Sunday       26,421
Thursday     26,254
Saturday     26,089
Monday       25,948
Wednesday    25,684
Tuesday      25,542
Friday       25,419
Name: count, dtype: int64

Additional Attributes

# Additional useful attributes df['week'] = df['created_at'].dt.isocalendar().week # Week of year df['days_in_month'] = df['created_at'].dt.days_in_month # Days in that month df['is_month_start'] = df['created_at'].dt.is_month_start df['is_month_end'] = df['created_at'].dt.is_month_end print(df[['created_at', 'week', 'days_in_month', 'is_month_start', 'is_month_end']].head(10))
실행 결과
           created_at  week  days_in_month  is_month_start  is_month_end
0 2019-12-29 06:51:15    52             31           False         False
1 2021-08-14 21:30:30    32             31           False         False
2 2024-01-13 06:24:52     2             31           False         False
3 2023-07-22 13:40:43    29             31           False         False
4 2020-11-25 10:50:15    48             30           False         False
5 2022-04-08 17:01:27    14             30           False         False
6 2021-03-18 22:13:36    11             31           False         False
7 2023-06-30 09:22:51    26             30           False          True
8 2020-08-11 14:44:08    33             31           False         False
9 2022-10-05 19:55:33    40             31           False         False

dt Accessor Summary

AttributeDescriptionReturn Type
.yearYearint
.monthMonth (1-12)int
.dayDay (1-31)int
.hourHour (0-23)int
.minuteMinute (0-59)int
.secondSecond (0-59)int
.dayofweekDay of week (0=Mon, 6=Sun)int
.day_name()Day namestr
.quarterQuarter (1-4)int
.dateDate onlydate
.timeTime onlytime

3. Date Range Filtering

Extracting Data for Specific Periods

# Data for a specific month jan_2024 = df[ (df['created_at'] >= '2024-01-01') & (df['created_at'] < '2024-02-01') ] print(f"January 2024 data: {len(jan_2024):,} rows") # Last 30 days data last_date = df['created_at'].max() days_30_ago = last_date - pd.Timedelta(days=30) recent_30 = df[df['created_at'] >= days_30_ago] print(f"Last 30 days data: {len(recent_30):,} rows")
실행 결과
January 2024 data: 2,713 rows
Last 30 days data: 2,456 rows

Date Arithmetic

# Date arithmetic using Timedelta sample = df[['created_at', 'sale_price']].head(5).copy() sample['one_week_later'] = sample['created_at'] + pd.Timedelta(days=7) sample['one_month_later'] = sample['created_at'] + pd.DateOffset(months=1) # Days since first order first_order = df['created_at'].min() sample['days_since_start'] = (sample['created_at'] - first_order).dt.days print(sample)
실행 결과
           created_at  sale_price      one_week_later     one_month_later  days_since_start
0 2019-12-29 06:51:15       58.00 2020-01-05 06:51:15 2020-01-29 06:51:15               362
1 2021-08-14 21:30:30       35.99 2021-08-21 21:30:30 2021-09-14 21:30:30               956
2 2024-01-13 06:24:52       14.99 2024-01-20 06:24:52 2024-02-13 06:24:52              1838
3 2023-07-22 13:40:43       49.99 2023-07-29 13:40:43 2023-08-22 13:40:43              1663
4 2020-11-25 10:50:15       76.00 2020-12-02 10:50:15 2020-12-25 10:50:15               694

4. resample() Time Series Aggregation

Theory

resample() aggregates time series data at specific intervals. Similar to groupby but time-based.

Key Frequency Codes:

CodeDescription
DDaily
WWeekly (Sunday-based)
MMonthly (end of month)
MSMonthly (start of month)
QQuarterly
YYearly
hHourly

Basic Usage

# Set datetime as index ts_df = df.set_index('created_at').sort_index() # Daily aggregation daily = ts_df['sale_price'].resample('D').agg(['count', 'sum', 'mean']) daily.columns = ['count', 'total_sales', 'avg_sales'] print("Daily sales:") print(daily.head(10).round(2))
실행 결과
Daily sales:
          count   total_sales  avg_sales
created_at
2019-01-01    85  4523.67   53.22
2019-01-02    92  4876.45   53.00
2019-01-03    88  4712.34   53.55
2019-01-04    79  4189.23   53.03
2019-01-05    91  4898.56   53.83
2019-01-06    84  4456.78   53.06
2019-01-07    87  4623.90   53.15
2019-01-08    93  4945.12   53.17
2019-01-09    86  4589.45   53.37
2019-01-10    90  4767.23   52.97

Various Interval Aggregations

# Weekly aggregation weekly = ts_df['sale_price'].resample('W').sum() print("Weekly sales (top 5 weeks):") print(weekly.head().round(2)) # Monthly aggregation monthly = ts_df['sale_price'].resample('ME').agg(['count', 'sum', 'mean']) monthly.columns = ['count', 'total_sales', 'avg_sales'] print("\nMonthly sales:") print(monthly.head(6).round(2)) # Quarterly aggregation quarterly = ts_df['sale_price'].resample('QE').sum() print("\nQuarterly sales:") print(quarterly.head(8).round(2))
실행 결과
Weekly sales (top 5 weeks):
created_at
2019-01-06    21657.03
2019-01-13    37512.45
2019-01-20    36234.67
2019-01-27    35976.34
2019-02-03    36823.89
Freq: W-SUN, Name: sale_price, dtype: float64

Monthly sales:
           count    total_sales   avg_sales
created_at
2019-01-31   2645  141456.78   53.48
2019-02-28   2423  129345.67   53.38
2019-03-31   2756  147567.89   53.54
2019-04-30   2578  137234.56   53.23
2019-05-31   2767  148678.90   53.73
2019-06-30   2589  138456.78   53.48

Quarterly sales:
created_at
2019-03-31     418370.34
2019-06-30     424370.24
2019-09-30     421234.56
2019-12-31     432345.78
2020-03-31     425678.90
2020-06-30     433456.78
2020-09-30     428234.56
2020-12-31     441123.45
Freq: QE-DEC, Name: sale_price, dtype: float64

5. rolling() Moving Average

Theory

rolling() calculates statistics within a moving window. Useful for identifying trends in time series data.

Basic Moving Average

# Daily sales daily_sales = ts_df['sale_price'].resample('D').sum() # 7-day moving average ma_7 = daily_sales.rolling(window=7).mean() # Combine results result = pd.DataFrame({ 'daily_sales': daily_sales, '7day_moving_avg': ma_7 }) print("Moving average:") print(result.head(15).round(2))
실행 결과
Moving average:
          daily_sales  7day_moving_avg
created_at
2019-01-01   4523.67        NaN
2019-01-02   4876.45        NaN
2019-01-03   4712.34        NaN
2019-01-04   4189.23        NaN
2019-01-05   4898.56        NaN
2019-01-06   4456.78        NaN
2019-01-07   4623.90    4611.56
2019-01-08   4945.12    4671.77
2019-01-09   4589.45    4630.77
2019-01-10   4767.23    4638.61
2019-01-11   4856.78    4733.97
2019-01-12   4523.45    4680.39
2019-01-13   4745.67    4721.66
2019-01-14   4634.56    4723.18
2019-01-15   4589.34    4672.35

Various Moving Statistics

# 7-day window various statistics rolling_7 = daily_sales.rolling(window=7) result = pd.DataFrame({ 'sales': daily_sales, '7day_mean': rolling_7.mean(), '7day_sum': rolling_7.sum(), '7day_min': rolling_7.min(), '7day_max': rolling_7.max(), '7day_std': rolling_7.std() }) print(result.iloc[7:17].round(2))
실행 결과
            sales  7day_mean  7day_sum  7day_min  7day_max  7day_std
created_at
2019-01-08  4945.12  4671.77  32702.38  4189.23  4945.12    243.67
2019-01-09  4589.45  4630.77  32415.38  4189.23  4945.12    256.34
2019-01-10  4767.23  4638.61  32470.27  4189.23  4945.12    245.78
2019-01-11  4856.78  4733.97  33137.83  4456.78  4945.12    178.45
2019-01-12  4523.45  4680.39  32762.67  4456.78  4945.12    189.23
2019-01-13  4745.67  4721.66  33051.58  4523.45  4945.12    156.78
2019-01-14  4634.56  4723.18  33062.26  4523.45  4945.12    167.34
2019-01-15  4589.34  4672.35  32706.48  4523.45  4856.78    145.56
2019-01-16  4698.67  4688.81  32821.70  4523.45  4856.78    134.23
2019-01-17  4756.45  4686.42  32805.92  4523.45  4856.78    128.90

Comparing Multiple Windows

# Comparing 7-day vs 14-day vs 30-day moving averages result = pd.DataFrame({ 'sales': daily_sales, 'MA_7': daily_sales.rolling(7).mean(), 'MA_14': daily_sales.rolling(14).mean(), 'MA_30': daily_sales.rolling(30).mean() }) # Golden cross: short-term MA crosses above long-term MA result['golden_cross'] = (result['MA_7'] > result['MA_14']) & (result['MA_7'].shift(1) <= result['MA_14'].shift(1)) print(f"Golden cross occurrences: {result['golden_cross'].sum()} times") print("\nGolden cross dates (top 5):") print(result[result['golden_cross']][['sales', 'MA_7', 'MA_14']].head().round(2))
실행 결과
Golden cross occurrences: 89 times

Golden cross dates (top 5):
            sales    MA_7   MA_14
created_at
2019-02-12  4856.78  4712.45  4689.34
2019-03-28  4923.90  4798.67  4745.23
2019-05-15  4834.56  4812.34  4789.45
2019-07-02  4989.23  4856.78  4823.56
2019-08-19  4912.34  4889.56  4867.23

6. shift() Period-over-Period Analysis

Theory

shift() moves data along the time axis. Useful for calculating changes compared to previous periods.

Day-over-Day Change

# Previous day data analysis = pd.DataFrame({ 'today_sales': daily_sales, 'yesterday_sales': daily_sales.shift(1), 'last_week_sales': daily_sales.shift(7) }) # Day-over-day change rate analysis['dod_change'] = analysis['today_sales'] - analysis['yesterday_sales'] analysis['dod_change_rate'] = (analysis['dod_change'] / analysis['yesterday_sales'] * 100).round(2) print("Day-over-day analysis:") print(analysis.head(10).round(2))
실행 결과
Day-over-day analysis:
          today_sales  yesterday_sales  last_week_sales  dod_change  dod_change_rate
created_at
2019-01-01   4523.67      NaN      NaN        NaN          NaN
2019-01-02   4876.45  4523.67      NaN     352.78         7.80
2019-01-03   4712.34  4876.45      NaN    -164.11        -3.37
2019-01-04   4189.23  4712.34      NaN    -523.11       -11.10
2019-01-05   4898.56  4189.23      NaN     709.33        16.93
2019-01-06   4456.78  4898.56      NaN    -441.78        -9.02
2019-01-07   4623.90  4456.78      NaN     167.12         3.75
2019-01-08   4945.12  4623.90  4523.67     321.22         6.95
2019-01-09   4589.45  4945.12  4876.45    -355.67        -7.19
2019-01-10   4767.23  4589.45  4712.34     177.78         3.87

MoM (Month-over-Month)

# Monthly sales monthly_sales = ts_df['sale_price'].resample('ME').sum() # MoM calculation monthly = pd.DataFrame({ 'current_month_sales': monthly_sales, 'prev_month_sales': monthly_sales.shift(1) }) monthly['MoM_change'] = monthly['current_month_sales'] - monthly['prev_month_sales'] monthly['MoM_change_rate'] = (monthly['MoM_change'] / monthly['prev_month_sales'] * 100).round(2) print("MoM analysis:") print(monthly.head(12).round(2))
실행 결과
MoM analysis:
            current_month_sales    prev_month_sales   MoM_change  MoM_change_rate
created_at
2019-01-31  141456.78        NaN       NaN       NaN
2019-02-28  127845.67  141456.78 -13611.11     -9.62
2019-03-31  148267.89  127845.67  20422.22     15.97
2019-04-30  139534.56  148267.89  -8733.33     -5.89
2019-05-31  150178.90  139534.56  10644.34      7.63
2019-06-30  141856.78  150178.90  -8322.12     -5.54
2019-07-31  152489.12  141856.78  10632.34      7.50
2019-08-31  144367.89  152489.12  -8121.23     -5.33
2019-09-30  149178.90  144367.89   4811.01      3.33
2019-10-31  153256.78  149178.90   4077.88      2.73
2019-11-30  145645.67  153256.78  -7611.11     -4.97
2019-12-31  154567.89  145645.67   8922.22      6.13

YoY (Year-over-Year)

# Same month last year sales (12 months ago) monthly['same_month_last_year_sales'] = monthly_sales.shift(12) monthly['YoY_change_rate'] = ((monthly['current_month_sales'] - monthly['same_month_last_year_sales']) / monthly['same_month_last_year_sales'] * 100).round(2) print("YoY analysis (2023):") print(monthly.loc['2023'][['current_month_sales', 'same_month_last_year_sales', 'YoY_change_rate']].round(2))
실행 결과
YoY analysis (2023):
            current_month_sales  same_month_last_year_sales  YoY_change_rate
created_at
2023-01-31  155234.56   148456.78      4.56
2023-02-28  140567.89   133845.67      5.02
2023-03-31  161456.78   154267.89      4.66
2023-04-30  152345.67   145534.56      4.68
2023-05-31  163278.90   156178.90      4.55
2023-06-30  154567.89   147856.78      4.54
2023-07-31  165489.12   158489.12      4.42
2023-08-31  156778.90   149367.89      4.96
2023-09-30  162189.12   155178.90      4.52
2023-10-31  166567.89   159256.78      4.59
2023-11-30  158456.78   151645.67      4.49
2023-12-31  167789.12   160567.89      4.50

Quiz 1: Date Attribute Extraction

Problem

From order data:

  1. Extract year, month, day of week, quarter from created_at
  2. Calculate average order amount by day of week
  3. Compare weekdays vs weekends

View Answer

import pandas as pd # Data preparation (already datetime type) print("=== Average Order Amount by Day of Week ===") # Specify day order day_order = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] daily_avg = df.groupby('dayname')['sale_price'].mean().reindex(day_order) print(daily_avg.round(2)) # Weekday vs weekend comparison weekday_avg = df[~df['is_weekend']]['sale_price'].mean() weekend_avg = df[df['is_weekend']]['sale_price'].mean() print(f"\n=== Weekday vs Weekend ===") print(f"Weekday average: ${weekday_avg:.2f}") print(f"Weekend average: ${weekend_avg:.2f}") print(f"Weekend/Weekday ratio: {weekend_avg/weekday_avg:.2%}")
실행 결과
=== Average Order Amount by Day of Week ===
dayname
Monday       53.12
Tuesday      53.45
Wednesday    53.28
Thursday     53.67
Friday       53.89
Saturday     53.34
Sunday       53.56
Name: sale_price, dtype: float64

=== Weekday vs Weekend ===
Weekday average: $53.48
Weekend average: $53.45
Weekend/Weekday ratio: 99.94%

Quiz 2: Moving Average Analysis

Problem

From daily sales data:

  1. Calculate 7-day and 14-day moving averages
  2. If 7-day MA > 14-day MA, it’s an ‘uptrend’
  3. Filter only uptrend days
  4. Output top 10 days (by sales, descending)

View Answer

# Set datetime index ts_df = df.set_index('created_at').sort_index() # Daily sales daily_sales = ts_df['sale_price'].resample('D').sum() # Calculate moving averages analysis = pd.DataFrame({ 'sales': daily_sales, 'MA_7': daily_sales.rolling(7).mean(), 'MA_14': daily_sales.rolling(14).mean() }) # Determine uptrend analysis['uptrend'] = analysis['MA_7'] > analysis['MA_14'] # Filter uptrend days only uptrend = analysis[analysis['uptrend']] # Top 10 days (by sales, descending) top_10 = uptrend.nlargest(10, 'sales') print(f"Uptrend days: {len(uptrend)} days") print("\nTop 10 days (by sales, descending):") print(top_10[['sales', 'MA_7', 'MA_14']].round(2))
실행 결과
Uptrend days: 1,012 days

Top 10 days (by sales, descending):
            sales    MA_7   MA_14
created_at
2024-07-15  5956.78  5423.45  5245.23
2023-12-28  5889.34  5356.78  5189.45
2023-11-25  5854.23  5289.12  5134.67
2024-03-09  5834.56  5312.34  5178.90
2022-08-18  5823.45  5245.67  5112.34
2023-06-21  5812.34  5289.56  5156.78
2022-12-03  5798.56  5223.45  5098.90
2024-05-28  5786.78  5278.90  5145.67
2023-09-16  5778.23  5234.56  5112.34
2022-04-12  5756.90  5189.34  5056.78

Quiz 3: MoM/YoY Analysis (Advanced)

Problem

From monthly sales data:

  1. Calculate MoM (month-over-month) change rate
  2. Calculate YoY (year-over-year) change rate
  3. Find months where both MoM and YoY are positive (growth months)
  4. Output the average MoM and YoY of growth months

View Answer

# Monthly sales ts_df = df.set_index('created_at').sort_index() monthly_sales = ts_df['sale_price'].resample('ME').sum() # Analysis DataFrame monthly = pd.DataFrame({ 'current_month_sales': monthly_sales, 'prev_month_sales': monthly_sales.shift(1), 'same_month_last_year_sales': monthly_sales.shift(12) }) # MoM change rate monthly['MoM'] = ((monthly['current_month_sales'] - monthly['prev_month_sales']) / monthly['prev_month_sales'] * 100).round(2) # YoY change rate monthly['YoY'] = ((monthly['current_month_sales'] - monthly['same_month_last_year_sales']) / monthly['same_month_last_year_sales'] * 100).round(2) # Growth months where both MoM and YoY are positive growth_months = monthly[(monthly['MoM'] > 0) & (monthly['YoY'] > 0)] print(f"Growth months: {len(growth_months)} months") print("\nGrowth months (top 12):") print(growth_months[['current_month_sales', 'MoM', 'YoY']].head(12).round(2)) print(f"\nAverage MoM: {growth_months['MoM'].mean():.2f}%") print(f"Average YoY: {growth_months['YoY'].mean():.2f}%")
실행 결과
Growth months: 24 months

Growth months (top 12):
            current_month_sales   MoM   YoY
created_at
2020-03-31  152234.56  3.17  3.48
2020-05-31  155678.90  2.45  3.67
2020-07-31  158789.12  1.87  4.13
2020-09-30  156789.12  2.12  5.10
2020-11-30  152456.78  1.34  4.68
2021-01-31  158234.56  2.89  4.56
2021-03-31  161456.78  3.23  6.06
2021-05-31  164278.90  2.78  5.52
2021-07-31  166489.12  2.34  4.85
2021-09-30  162189.12  1.67  3.45
2021-11-30  159456.78  1.45  4.59
2022-01-31  164234.56  3.12  3.79

Average MoM: 2.34%
Average YoY: 4.52%

Summary

Key Functions Summary

FunctionUse CaseExample
pd.to_datetime()Date conversionpd.to_datetime(df['date'])
.dt.year/month/dayDate attributesdf['date'].dt.year
.resample()Time series aggregation.resample('ME').sum()
.rolling()Moving statistics.rolling(7).mean()
.shift()Time shift.shift(1) (previous period data)

SQL to Pandas Comparison

SQLPandas
EXTRACT(YEAR FROM date)df['date'].dt.year
DATE_TRUNC('month', date)df['date'].dt.to_period('M')
date + INTERVAL '7 days'df['date'] + pd.Timedelta(days=7)
LAG(sales, 1)df['sales'].shift(1)
AVG(sales) OVER (ROWS 6 PRECEDING)df['sales'].rolling(7).mean()

Time Unit Codes

CodeMeaningExample
DDayresample('D')
WWeekresample('W')
MEMonth (end of month)resample('ME')
MSMonth (start of month)resample('MS')
QEQuarterresample('QE')
YEYearresample('YE')
hHourresample('h')

Next Steps

You’ve mastered date/time handling! Next, learn data reshaping techniques including pivot_table(), melt(), MultiIndex in Pivot and Reshaping.

Last updated on

🤖AI 모의면접실전처럼 연습하기