Window.rowsBetween - only consider rows fulfilling a specific condition (e.g. not being null)

Problem



I have a Spark DataFrame with a column that contains values only for some rows, not for every row (on a somewhat regular basis, e.g. only every 5 to 10 rows based on the id).



Now, I would like to apply a window function to the rows containing values, involving the two previous and the two following rows that also contain values (basically pretending that the rows containing nulls don't exist, i.e. that they don't count towards the rowsBetween range of the window). In practice, the effective window size can be arbitrarily large, depending on how many null rows there are, but I always need exactly two values before and two after. Also, the end result should contain all rows, because the other columns carry important information.



Example



For example, I want to calculate the sum over the previous two, the current, and the next two non-null values for the non-null rows of the following DataFrame:



from pyspark.sql.window import Window
import pyspark.sql.functions as F
from pyspark.sql import Row

df = spark.createDataFrame([Row(id=i, val=i * 2 if i % 5 == 0 else None, foo='other') for i in range(100)])
df.show()


Output:



+-----+---+----+
| foo| id| val|
+-----+---+----+
|other| 0| 0|
|other| 1|null|
|other| 2|null|
|other| 3|null|
|other| 4|null|
|other| 5| 10|
|other| 6|null|
|other| 7|null|
|other| 8|null|
|other| 9|null|
|other| 10| 20|
|other| 11|null|
|other| 12|null|
|other| 13|null|
|other| 14|null|
|other| 15| 30|
|other| 16|null|
|other| 17|null|
|other| 18|null|
|other| 19|null|
+-----+---+----+


If I just use a window function over the DataFrame as is, I can't specify the condition that the values must not be null, so the window contains nothing but null values around the current row, making the sum equal to the row's own value:



df2 = df.withColumn('around_sum', F.when(F.col('val').isNotNull(), F.sum(F.col('val')).over(Window.rowsBetween(-2, 2).orderBy(F.col('id')))).otherwise(None))
df2.show()


Result:



+-----+---+----+----------+
| foo| id| val|around_sum|
+-----+---+----+----------+
|other| 0| 0| 0|
|other| 1|null| null|
|other| 2|null| null|
|other| 3|null| null|
|other| 4|null| null|
|other| 5| 10| 10|
|other| 6|null| null|
|other| 7|null| null|
|other| 8|null| null|
|other| 9|null| null|
|other| 10| 20| 20|
|other| 11|null| null|
|other| 12|null| null|
|other| 13|null| null|
|other| 14|null| null|
|other| 15| 30| 30|
|other| 16|null| null|
|other| 17|null| null|
|other| 18|null| null|
|other| 19|null| null|
+-----+---+----+----------+


I was able to achieve the desired result by creating a second DataFrame containing only the rows where the value is not null, doing the window operation there, and joining the result back afterwards:



df3 = (df.where(F.col('val').isNotNull())
       .withColumn('around_sum', F.sum(F.col('val')).over(Window.rowsBetween(-2, 2).orderBy(F.col('id'))))
       .select(F.col('around_sum'), F.col('id').alias('id2')))
df3 = df.join(df3, F.col('id') == F.col('id2'), 'outer').orderBy(F.col('id')).drop('id2')
df3.show()


Result:



+-----+---+----+----------+
| foo| id| val|around_sum|
+-----+---+----+----------+
|other| 0| 0| 30|
|other| 1|null| null|
|other| 2|null| null|
|other| 3|null| null|
|other| 4|null| null|
|other| 5| 10| 60|
|other| 6|null| null|
|other| 7|null| null|
|other| 8|null| null|
|other| 9|null| null|
|other| 10| 20| 100|
|other| 11|null| null|
|other| 12|null| null|
|other| 13|null| null|
|other| 14|null| null|
|other| 15| 30| 150|
|other| 16|null| null|
|other| 17|null| null|
|other| 18|null| null|
|other| 19|null| null|
+-----+---+----+----------+


Question



Now I am wondering whether I can somehow get rid of the join (and the second DataFrame) and instead specify the condition in the window function directly.



Is this possible?
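
To make the idea concrete, this is roughly the join-free direction I am imagining (a sketch only, which I have not verified beyond the toy data above): give every row a running count of the non-null values seen so far, so that each non-null row gets a unique index among the non-null rows, then window over that index with rangeBetween; the null rows fall inside the range but contribute nothing to the sum:

# Sketch: index the non-null rows, then use a range frame on that index.
w_idx = Window.orderBy('id').rowsBetween(Window.unboundedPreceding, 0)
w_around = Window.orderBy('val_idx').rangeBetween(-2, 2)

df4 = (df
       .withColumn('val_idx', F.count('val').over(w_idx))  # count() ignores nulls
       .withColumn('around_sum',
                   F.when(F.col('val').isNotNull(), F.sum('val').over(w_around)))
       .drop('val_idx'))

On the example data this reproduces the sums from the join approach (30, 60, 100, 150, ...), but I have not checked its behaviour in corner cases or at scale.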
python apache-spark pyspark window-functions

asked Nov 20 '18 at 16:46 by Matthias

  • Did you consider fillna()?

    – Bala
    Nov 20 '18 at 17:44
  • @Bala: Not sure how I could use fillna() for this purpose; I don't actually want to fill the null values. I also looked at using last/first, but I haven't found a solution yet that can handle more than one value on each side (and I need two here).

    – Matthias
    Nov 21 '18 at 11:29
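
For reference, a minimal sketch of the one-value-per-side variant mentioned in the comment above, assuming the df from the question: last and first with ignorenulls=True reach the single nearest non-null neighbour on each side, but there is no obvious way to extend this to the second-nearest one.

w_prev = Window.orderBy('id').rowsBetween(Window.unboundedPreceding, -1)
w_next = Window.orderBy('id').rowsBetween(1, Window.unboundedFollowing)

df_one = (df
          .withColumn('prev1', F.last('val', ignorenulls=True).over(w_prev))    # nearest non-null before
          .withColumn('next1', F.first('val', ignorenulls=True).over(w_next)))  # nearest non-null after
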
1 Answer
A good solution is to start by filling the nulls with 0 and then perform the operations. Apply the fillna only to the column involved, like this:



df = df.fillna(0, subset=['val'])


If you are not sure whether you want to get rid of the nulls, copy the value column, calculate the window over the copy, and drop the copy after the operation.



Like this:



df = df.withColumn('val2', F.col('val'))
df = df.fillna(0, subset=['val2'])
# Then perform the operations over val2.
df = df.withColumn('around_sum', F.sum(F.col('val2')).over(Window.rowsBetween(-2, 2).orderBy(F.col('id'))))
# After the operations, get rid of the copy column.
df = df.drop('val2')





answered Nov 27 '18 at 13:40 (edited Nov 27 '18 at 13:45) by Manrique

  • Thanks for the response! The result looks very different from my expected result, however: When I run it, around_sum == value for my example dataframe.

    – Matthias
    Nov 27 '18 at 14:04
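
(Note on the behaviour reported in the comment above: rowsBetween(-2, 2) counts physical rows regardless of their values, so filling the nulls with 0 does not shrink the frame. For id=5 the frame still covers ids 3 through 7, and the sum is 0 + 0 + 10 + 0 + 0 = 10, equal to the row's own value, which is exactly what the comment observes.)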