Top "Window-functions" questions

A window function is a SQL operation that performs a calculation across a set of rows related to the current row (its "window"). Unlike a GROUP BY aggregate, it does not collapse the partition into a single output row: every input row is kept and gains the computed value.
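For example, in PostgreSQL (a minimal sketch, assuming a hypothetical sales table with region and amount columns), an aggregate used as a window keeps every row while adding the partition total:

    -- Each row keeps its own amount and also sees its region's total.
    SELECT region,
           amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales;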

PostgreSQL: running count of rows for a query 'by minute'

I need to query, for each minute, the total count of rows up to that minute. The best I could …

sql postgresql datetime aggregate-functions window-functions
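A sketch of one common answer, assuming a hypothetical events table with a created_at timestamp: group to the minute, then run a window sum over the per-minute counts.

    -- Per-minute counts plus a running total up to each minute.
    SELECT date_trunc('minute', created_at) AS minute,
           count(*) AS rows_this_minute,
           sum(count(*)) OVER (ORDER BY date_trunc('minute', created_at)) AS rows_so_far
    FROM events
    GROUP BY 1
    ORDER BY 1;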
pyspark: rolling average using time-series data

I have a dataset consisting of a timestamp column and a dollars column. I would like to find the average …

apache-spark pyspark window-functions moving-average
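The usual pattern here (a sketch in Spark SQL, assuming a registered view ticks with ts and dollars columns) is to order by the timestamp cast to epoch seconds and size the RANGE frame in seconds:

    -- 7-day rolling average: 604800 seconds = 7 days.
    SELECT ts,
           dollars,
           AVG(dollars) OVER (
               ORDER BY CAST(ts AS LONG)
               RANGE BETWEEN 604800 PRECEDING AND CURRENT ROW
           ) AS avg_7d
    FROM ticks;

In PySpark the same frame is expressed with Window.orderBy(...).rangeBetween(-604800, 0).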
Using window functions in an update statement

I have a large PostgreSQL table which I access through Django. Because Django's ORM does not support window functions, I …

sql django postgresql window-functions
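PostgreSQL does not allow a window function directly in UPDATE ... SET, so the standard workaround (a sketch, assuming a hypothetical table app_item with id, grp, val, and rank_in_grp columns) is to compute it in a derived table and join back on the primary key:

    -- Compute the rank in a subquery, then join it into the UPDATE.
    UPDATE app_item AS t
    SET rank_in_grp = s.rnk
    FROM (
        SELECT id,
               rank() OVER (PARTITION BY grp ORDER BY val DESC) AS rnk
        FROM app_item
    ) AS s
    WHERE s.id = t.id;

From Django this is typically issued as raw SQL, e.g. via cursor.execute().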
RANK() over PARTITION BY in MySQL

I'm completely stumped as to how to create a new column "LoginRank" from rank() over(partition by x, order by y desc) …

mysql window-functions rank partition-by
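Window functions arrived in MySQL 8.0; on 8.0+ the column is direct (a sketch, assuming a hypothetical logins table with user_id and login_time columns):

    -- Most recent login per user gets rank 1.
    SELECT user_id,
           login_time,
           RANK() OVER (PARTITION BY user_id ORDER BY login_time DESC) AS LoginRank
    FROM logins;

Before 8.0 there is no rank() at all, and answers typically emulate it with user variables or a correlated counting subquery.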
PostgreSQL - how should I use first_value()?

This answer shows how to produce High/Low/Open/Close values from a ticker: Retrieve aggregates for arbitrary time …

sql postgresql postgresql-9.2 window-functions
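A sketch of the OHLC pattern, assuming a hypothetical ticker table with ts and price columns. The catch with first_value()/last_value() is the default frame, which ends at the current row; extend it to UNBOUNDED FOLLOWING or close never sees the last tick:

    -- Open/high/low/close per minute; the explicit frame is what makes
    -- last_value() return the true close instead of the current row.
    SELECT DISTINCT
           date_trunc('minute', ts) AS minute,
           first_value(price) OVER w AS open,
           max(price)         OVER w AS high,
           min(price)         OVER w AS low,
           last_value(price)  OVER w AS close
    FROM ticker
    WINDOW w AS (
        PARTITION BY date_trunc('minute', ts)
        ORDER BY ts
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    )
    ORDER BY minute;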
pyspark: count distinct over a window

I just tried doing a countDistinct over a window and got this error: AnalysisException: u'Distinct window functions are not supported: …

count pyspark window-functions distinct-values
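The error is a genuine limitation: Spark refuses DISTINCT aggregates inside a window. The usual workaround (a sketch in Spark SQL, assuming a hypothetical logins view with site and user_id columns) is collect_set, which de-duplicates, followed by size:

    -- size(collect_set(...)) counts distinct values per window.
    SELECT site,
           user_id,
           SIZE(COLLECT_SET(user_id) OVER (PARTITION BY site)) AS distinct_users
    FROM logins;

The PySpark spelling is F.size(F.collect_set('user_id').over(window)).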
Spark SQL window function with complex condition

This is probably easiest to explain through example. Suppose I have a DataFrame of user logins to a website, for …

sql apache-spark pyspark apache-spark-sql window-functions
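Conditions that depend on the previous row are usually decomposed into lag() plus a running sum that labels groups. A sketch in Spark SQL, assuming a hypothetical logins view with user_name and login_date columns, where a user's "became active" date resets after a gap of more than 5 days:

    -- Flag rows that start a new activity streak, number the streaks with a
    -- running sum, then take each streak's first date.
    SELECT user_name,
           login_date,
           MIN(login_date) OVER (PARTITION BY user_name, streak_id) AS became_active
    FROM (
        SELECT *,
               SUM(new_streak) OVER (PARTITION BY user_name ORDER BY login_date) AS streak_id
        FROM (
            SELECT *,
                   CASE WHEN LAG(login_date) OVER (PARTITION BY user_name ORDER BY login_date) IS NULL
                             OR DATEDIFF(login_date,
                                         LAG(login_date) OVER (PARTITION BY user_name ORDER BY login_date)) > 5
                        THEN 1 ELSE 0 END AS new_streak
            FROM logins
        ) flagged
    ) numbered;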
T-SQL calculate moving average

I am working with SQL Server 2008 R2, trying to calculate a moving average. For each record in my view, I …

sql tsql sql-server-2008-r2 window-functions moving-average
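The window frame syntax (ROWS BETWEEN ... PRECEDING) only arrived in SQL Server 2012, so 2008 R2 needs a different route. A sketch of the 2008-era workaround, assuming a hypothetical view v with a gap-free integer id and a val column:

    -- Correlated subquery over the previous 250 rows by id.
    -- 1.0 * val avoids integer division in AVG.
    SELECT t.id,
           t.val,
           (SELECT AVG(1.0 * val)
            FROM v
            WHERE id BETWEEN t.id - 249 AND t.id) AS moving_avg_250
    FROM v AS t;

On SQL Server 2012 or later the same thing is a single window: AVG(1.0 * val) OVER (ORDER BY id ROWS BETWEEN 249 PRECEDING AND CURRENT ROW).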
Select random row for each group

I have a table like this:

ID  ATTRIBUTE
1   A
1   A
1   B
1   C
2   B
2   C
2   C
3   A
3   B
3   C

I'd like …

sql postgresql random group-by window-functions
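Two stock answers in PostgreSQL, sketched against a hypothetical table t(id, attribute): row_number() over a random order, or DISTINCT ON with a random sort.

    -- One random row per id via a window over random().
    SELECT id, attribute
    FROM (
        SELECT id,
               attribute,
               row_number() OVER (PARTITION BY id ORDER BY random()) AS rn
        FROM t
    ) s
    WHERE rn = 1;

    -- Equivalent DISTINCT ON form.
    SELECT DISTINCT ON (id) id, attribute
    FROM t
    ORDER BY id, random();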
Partitioning by multiple columns in PySpark with columns in a list

My question is similar to this thread: Partitioning by multiple columns in Spark SQL, but I'm working in PySpark rather …

apache-spark pyspark window-functions
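In Spark SQL the list simply becomes the PARTITION BY column list (a sketch, assuming a hypothetical orders view with a, b, ts, and amount columns); in PySpark itself the usual trick is to unpack the Python list, Window.partitionBy(*column_list).

    -- Partitioning by several columns at once.
    SELECT a,
           b,
           ts,
           SUM(amount) OVER (PARTITION BY a, b ORDER BY ts) AS running_total
    FROM orders;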