Day 06 - Analyse Website Traffic

The Giggle Analytics#

For the next five days, we will be building a mini-solution similar to Google Analytics, called Giggle Analytics.

Today we will focus on understanding the demographics of the website's users, as shown below.

GA Demographic!

Understanding the Website events#

For this mini-solution, we will assume that we have received website logs from the client's browser as JSON files with the following fields.

"ip_addr":"xx.xx.xx.x", //IP address of the user
"user_id": "u09", //The identity of the client
"timestamp": 1644218754, //The time that the request was received
"request": "GET /guides/100-days-of-spark/", //The request line that includes the HTTP method used, the requested resource path
"user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36", // Browser user agent
"status": 200, //The status code that the server sends back to the client
"size": 2048 //The size of the object requested

The field we are interested in is ip_addr, from which we'll find the location details using the IP Location Finder by KeyCDN.

Here's a simple example of calling this API with the curl command (KeyCDN requires that the User-Agent identify your own website):

curl -k -H "User-Agent: keycdn-tools:https://example.com" "https://tools.keycdn.com/geo.json?host={ip}"

The response looks like this:

"status": "success",
"description": "Data successfully received.",
"data": {
"geo": {
"host": "",
"ip": "",
"rdns": "",
"asn": "",
"isp": "",
"country_name": "Colombia",
"country_code": "CO",
"region_name": null,
"region_code": null,
"city": null,
"postal_code": null,
"continent_name": "South America",
"continent_code": "SA",
"latitude": 4.5981,
"longitude": -74.0799,
"metro_code": null,
"timezone": "America\/Bogota",
"datetime": "2022-02-07 04:12:33"

The complete dataset will look like this:

{"ip_addr":"", "user_id": "u09", "timestamp": 1644218754, "request": "GET /", "user_agent": "..","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u10", "timestamp": 1644218754, "request": "GET /", "user_agent": "..","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u11", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u12", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u13", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u14", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u15", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u16", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u17", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}
{"ip_addr":"", "user_id": "u18", "timestamp": 1644218754, "request": "GET /", "user_agent": "...","status": 200, "size": 2048}

Parse the website logs#

Let's parse the website logs and transform them into a dataframe consisting of the following fields:

  • country
  • user_id
  • ip_addr

Open the Spark shell and import all the classes below. Then create a function getCountry(ip: String) that fetches the corresponding country name using the KeyCDN API.

Use the :paste command in the shell, paste the snippet below, and then press CTRL+D to execute all the pasted lines.

import java.io.{BufferedReader, InputStreamReader}
import java.net.{HttpURLConnection, URL}
import com.fasterxml.jackson.databind.ObjectMapper

def getCountry(ip: String): String = {
  val url = s"https://tools.keycdn.com/geo.json?host=${ip}"
  val USER_AGENT = "keycdn-tools:https://example.com" // KeyCDN requires your site URL here
  val obj = new URL(url)
  val con = obj.openConnection.asInstanceOf[HttpURLConnection]
  con.setRequestProperty("User-Agent", USER_AGENT)
  val in = new BufferedReader(new InputStreamReader(con.getInputStream))
  val response = new StringBuffer
  var inputLine: String = in.readLine()
  while (inputLine != null) {
    response.append(inputLine)
    inputLine = in.readLine()
  }
  in.close()
  // Parse the JSON response and pull out data.geo.country_name
  val mapper = new ObjectMapper()
  mapper.readTree(response.toString)
    .path("data").path("geo").path("country_name").asText()
}

Now we'll read the website logs and transform the dataframe to include each client's country, using the function defined above. For the transformation we'll use the Dataset API in Scala and Java, and the Dataframe API in Python.

What's the difference between a Dataframe and a Dataset? And when should I use one over the other?

Yeah! I know it's a common interview question. The Dataset API is for advanced use cases, where the transformation from one type to another is non-trivial, while the Dataframe API covers commonly used transformations expressed through SQL-like operators - COUNT, AGGREGATE, SUM, GROUPBY, DATE_DIFF and so on.

For this example, there is no readily available Dataframe function to get a country from an IP, so we have taken the route of using the Dataset API.
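The typed-versus-untyped distinction can be felt even without Spark. A typed tuple (what Dataset[T] gives you) is checked at compile time, while a Row-like lookup by column name (what Dataframe rows resemble) is only checked when you read the field. A plain-Scala analogy, not actual Spark API:

```scala
// Typed, Dataset-style: the compiler knows each field's type.
val typed: (String, String, String) = ("u09", "xx.xx.xx.x", "India")
val country: String = typed._3 // a type error here would fail compilation

// Untyped, Row-style: fields are looked up by name and cast at runtime.
val untyped: Map[String, Any] = Map("user_id" -> "u09", "country" -> "India")
val country2 = untyped("country").asInstanceOf[String] // a wrong cast fails only at runtime
```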

The following table highlights how the transformation works in different languages:

| Language | API Abstraction |
|----------|-----------------|
| Scala    | Dataset[T] & Dataframe (i.e. Dataset[Row]) |
| Python   | Dataframe (convert to RDD for non-trivial transformations) |
  • Read the logs from the path /path/to/logs.json as a Dataframe.

  • Add a column, country, which is fetched using the getCountry function we defined above. This gives us a Dataset of String tuples with sequenced column names, like below.

    org.apache.spark.sql.Dataset[(String, String, String)] = [_1: string, _2: string ... 1 more field]

    We then convert this dataset back into a Dataframe using the toDF(...) method, specifying the column names. So you can consider Dataframe an alias for Dataset[Row].

scala> val websiteLogs = spark.read.json("/path/to/logs.json")
websiteLogs: org.apache.spark.sql.DataFrame = [ip_addr: string, request: string ... 5 more fields]

scala> val withCountry = websiteLogs.map(row => (
         row.getAs[String]("user_id"),
         row.getAs[String]("ip_addr"),
         getCountry(row.getAs[String]("ip_addr"))
       )).toDF("user_id", "ip", "country")
scala> withCountry.show()
+-------+---+-------------+
|user_id| ip|      country|
+-------+---+-------------+
|    u09|   |        India|
|    u10|   |    Greenland|
|    u11|   |United States|
|    u12|   |      Germany|
|    u13|   |United States|
|    u14|   |        India|
|    u15|   |United States|
|    u16|   |        China|
|    u17|   |    Australia|
|    u18|   |United States|
+-------+---+-------------+

Count the users by location#

Let's count the total number of users, and then use that count to get the percentage of users that visited our website from each country. To get the users for each country, we will use the groupBy function defined on the dataframe.

  • Calculate the total number of users, assuming each user accessed the website once. This is stored in the value total.
  • Get the number of users for each country, then extract each field value using getAs[String]/getAs[Long] to calculate that country's fraction of the total users.
  • Convert the result back into a dataframe and run show to get a view of the end result.
scala> val total = withCountry.count()

scala> val stats = withCountry.groupBy("country").count()
         .map(row => (row.getAs[String]("country"),
           row.getAs[Long]("count"),
           row.getAs[Long]("count") * 100.0 / total))
         .toDF("Country", "Users", "% Users")
scala> stats.show()
+-------------+-----+-------+
|      Country|Users|% Users|
+-------------+-----+-------+
|      Germany|    1|   10.0|
|        India|    2|   20.0|
|United States|    4|   40.0|
|        China|    1|   10.0|
|    Greenland|    1|   10.0|
|    Australia|    1|   10.0|
+-------------+-----+-------+
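The arithmetic behind the % Users column is simply count * 100.0 / total. The same computation can be sketched with plain Scala collections, using the per-country counts from the table above:

```scala
// Per-country user counts matching the table above.
val counts = Map(
  "Germany" -> 1L, "India" -> 2L, "United States" -> 4L,
  "China" -> 1L, "Greenland" -> 1L, "Australia" -> 1L
)
val total = counts.values.sum // 10 users in all

// Same per-row computation the Dataset map above performs:
// (country, users, percentage of total users).
val stats = counts.map { case (country, users) =>
  (country, users, users * 100.0 / total)
}
println(stats.find(_._1 == "India")) // Some((India,2,20.0))
```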


Today we've seen how to use the Dataframe and Dataset APIs to aggregate users across different countries, providing interesting insights to website owners that will help them target the right audience. Analytical applications like this make software more intelligent and effective.

Stay connected with us through our Slack workspace to learn more.

What's next ?#

Tomorrow we'll identify which devices are used the most, for accessing the website.
