
Why does NodeJS scale insanely?

If you are new to NodeJS, you might have heard that NodeJS is single threaded. You might also have heard that it is insanely scalable, serving millions of users in real time. How does a single threaded application scale so well?

Single threading is half the truth

Yes, NodeJS follows a single-threaded event loop model, but it is not actually single threaded. It works on an event-based execution architecture.

NodeJS has a main thread and additional worker threads. Tasks that do not have to be serviced synchronously can be passed on to the worker threads. When the results from a worker thread are ready, the worker reports back to the event loop. The event loop picks up the event and queues its callback on the main program stack as the next in line for execution.

This provides a single threaded, but pseudo-parallel execution environment.

Understanding NodeJS Execution

const request = require('request');
let f1 = function() {
  console.log('Hello at beginning');
  request('https://google.com', (err, res, body) => {
    console.log('Hello from function');
  });
  console.log('Hello at end');
}

f1();

If we executed the above code in a procedural manner, we would expect the following output.

Hello at beginning
Hello from function
Hello at end

However, your NodeJS application will show the following output.

Hello at beginning
Hello at end
Hello from function

Why is this so? Why does the request() callback execute after the last console.log() statement? This is because invoking request() is an asynchronous task. The execution of this task gets allotted to a worker thread. While the worker thread waits for the response from google.com, the main thread can continue with further execution. As a result, the last console output is printed while the worker thread is still waiting for a response to the request.

When the worker thread does receive a response, it puts an entry into the event queue. When the main thread is free and doing nothing else, the event loop picks up the event and executes the callback that was registered for it. Event loop tasks are executed only when the main thread is free and not performing any other task.
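The same ordering can be observed without any network call, using a zero-delay timer; a minimal sketch relying only on Node's standard setTimeout:

```javascript
const order = [];

order.push('beginning');

// Even with a 0 ms delay, the callback is only queued on the event
// loop; it runs once the current call stack has finished executing.
setTimeout(() => {
  order.push('from callback');
  console.log(order.join(' | ')); // beginning | end | from callback
}, 0);

order.push('end');
```

No matter how small the delay, the callback cannot interrupt the synchronous code that is already running on the main thread.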

NodeJS Async Execution
Call to request() passed on to a worker thread

So why is NodeJS insanely scalable?

This unique event-based model prevents NodeJS from being blocked by any specific event. Each event is treated and processed independently of the others. This holds only as long as you don’t write code that blocks the main event loop.

Since async function calls report back to the event loop when they are ready to be executed, the main thread is always busy doing useful work and never waiting on any task. A properly designed NodeJS application can thereby keep the main event loop free of long running work, by passing long running tasks to worker threads.

This concept is very different from spawning new threads to execute tasks in parallel. There is a physical limit to the number of threads a system can run. When this limit is reached, and individual threads are waiting for long running operations to complete, all threads would essentially be waiting, making the complete application slow.

In NodeJS, by contrast, the main event loop only picks up tasks that are ready to be executed. Millions of concurrent events can therefore be handled without affecting the performance of the main thread, allowing well designed applications to scale significantly.

NodeJS is turning out to be one of the preferred backend systems for web applications and web services.

Count number of elements in Iterator – Java

The most elegant way to get the size of an Iterator, i.e. the count of elements it holds, is by using the utility method provided within the Guava library.

int size = Iterators.size(myIterator);

myIterator must be an implementation of Iterator<T>. Here T can be of any type.

Note: The iterator will be consumed when the size is returned, so the same iterator may not be used for any other purpose.
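If pulling in Guava only for this is undesirable, the same count can be computed with a plain JDK loop; a minimal sketch (the class and method names here are illustrative):

```java
import java.util.Iterator;
import java.util.List;

public class IteratorCount {

    // Counts elements by draining the iterator, just as Iterators.size does;
    // the iterator is consumed in the process.
    static int size(Iterator<?> iterator) {
        int count = 0;
        while (iterator.hasNext()) {
            iterator.next();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        Iterator<String> it = List.of("a", "b", "c").iterator();
        System.out.println(size(it)); // prints 3
    }
}
```

As with the Guava method, the iterator cannot be reused after counting.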

Getting Guava Library

Library Source on GitHub

Maven

<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>27.0-jre</version>
  <!-- or, for Android: -->
  <version>27.0-android</version>
</dependency>

Gradle

dependencies {
  compile 'com.google.guava:guava:27.0-jre'
  // or, for Android:
  api 'com.google.guava:guava:27.0-android'
}


Create String from an array in Java

String.join(",", collection);

The above statement uses the join function of the String class to join the elements of the collection into a single String. The elements are joined using the specified delimiter, a “,” in this case.

final List<String> collection = new ArrayList<>();
collection.add("Hello");
collection.add("World");

final String joinedString = String.join(",", collection);

System.out.println(joinedString);

The above example joins an ArrayList of Strings into a single comma-delimited string.

Output

Hello,World

Integer Collection to String

The String.join function only works on collections of CharSequence elements such as String. However, if you happen to have a collection of Integers and need to join them into a single comma-delimited string, the example below shows how.

final  List<Integer> intList = new ArrayList<>();
intList.add(10);
intList.add(20);
intList.add(30);

final String joinedIntString = String.join(",", intList.stream().map(x -> "" + x).collect(Collectors.toList()));

System.out.println(joinedIntString);

Output

10,20,30

The .map function maps each Integer element into a String element simply by appending the Integer to an empty string.
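The intermediate list can be avoided altogether by collecting straight into a string with Collectors.joining; a small sketch:

```java
import java.util.List;
import java.util.stream.Collectors;

public class JoinIntegers {
    public static void main(String[] args) {
        List<Integer> intList = List.of(10, 20, 30);

        // map each Integer to its String form, then join with commas
        String joined = intList.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(","));

        System.out.println(joined); // 10,20,30
    }
}
```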


Increase Storage Space on AIX Platform

Use the chfs command to increase, decrease or set the size of a mounted volume / file system on AIX platform.

chfs -a size=54132736 /usr

Sets the size of the /usr mount point to the specified size, interpreted in 512-byte blocks when no unit suffix is given. The size can also be specified in short form. The command below sets the size of /usr to 25 GB.

chfs -a size=25G /usr

Instead of setting an absolute size, one can also increase the size of a mount point. The command below increases the size of /usr by 1 GB.

chfs -a size=+1G /usr

Just as a size increase is possible, a size decrease is also possible. The command below decreases the size of /usr by 1 GB.

chfs -a size=-1G /usr

Key Considerations

To increase the size of a file system or mount point, you must have enough free disk space available in the volume group that hosts the logical volume. The operation succeeds only if the necessary disk space is available.

When decreasing the size of a file system, the file system must have at least that much free space. For example, if you have a 25 GB file system with 20 GB of data on it, a 2 GB decrease would succeed, leaving a revised size of 23 GB. If the file system held 24 GB of data, however, you could reduce the size by only 1 GB, not by 2 GB.

Read Excel sheet in Java

The example below shows how to open and read Excel documents using the Apache POI library.

File file = new File("sample.xlsx");
Workbook workbook = WorkbookFactory.create(file);

workbook.sheetIterator().forEachRemaining(sheet -> {
    for(int i = 0; i <= sheet.getLastRowNum(); i++) { // getLastRowNum() returns a 0-based index
        Row row = sheet.getRow(i);
        if(row == null) continue; // rows without data may be null
        for(int j = 0; j < row.getLastCellNum(); j++) {
            Cell cell = row.getCell(j);
            if(cell == null) continue; // undefined cells may be null
            System.out.println(cell.getStringCellValue());
        }
    }
});

The above example prints the string value of all cells in sequential order, for all sheets present inside the workbook.

The Maven imports for the Apache POI library are mentioned below. The library can be downloaded from other sources as well.

<!-- https://mvnrepository.com/artifact/org.apache.poi/poi -->
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.17</version>
</dependency>

<!-- https://mvnrepository.com/artifact/org.apache.poi/poi-ooxml -->
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.17</version>
</dependency>

The Workbook object represents a single Excel document. The document may contain multiple sheets, each represented by a Sheet object.

workbook.sheetIterator().forEachRemaining(sheet -> {});

is used to iterate over all sheets present inside the Workbook. It is possible for a Workbook to have no sheets at all.

sheet.getLastRowNum();

Gets the 0-based index of the last row present in the respective sheet. This value is used to iterate through all rows in the sheet, from index 0 through the last index inclusive. The last row index is expected to differ per sheet.

Row row = sheet.getRow(i);

Gets the complete row at the specified row index; row indexes start from 0. A row is a collection of cells, where each cell corresponds to a column. The data values are not held by the Row itself, but by the Cells it contains.

row.getLastCellNum();

is used to get the number of cells in the Row. Typically each row in a sheet is expected to have the same number of cells. However, if some rows have merged cells, the number of cells in one row may differ from the number in another row of the same sheet. It is therefore important to get the cell count for each row and then iterate through each cell in that respective row.

Cell cell = row.getCell(j);

Gets the single Cell present at position j within the row. The cell index starts from 0.

cell.getStringCellValue();

Assuming the cell contains a String value, it can be obtained using the getStringCellValue() function on the cell object. The cell may however contain numeric data, a date-time value or a formula. The contents of the cell need to be read with the appropriate getter to prevent errors.

Access-Control-Allow-Origin setting in NodeJS

The example shows setting CORS headers on NodeJS web services built with the Express engine.

var router = express.Router();

router.options('/', function(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'POST, GET, PUT, DELETE, OPTIONS');
  res.setHeader('Access-Control-Allow-Credentials', 'false');
  res.setHeader('Access-Control-Max-Age', '86400'); // 24 hours
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, X-HTTP-Method-Override, Content-Type, Accept');
  next();
});

Add support for the headers listed above inside every service implementation. The * indicates that requests will be allowed from any origin. Such a configuration should be used in sandbox / test mode only.

For production use it is recommended to allow requests from a specific origin only, as with the code below. Note that the Access-Control-Allow-Origin header does not accept wildcard subdomains such as *.mywebsite.com; it takes either * or a single exact origin.

res.setHeader('Access-Control-Allow-Origin', 'https://www.mywebsite.com');


Reverse a Java List, Array

The order of elements in a List or its subclasses such as ArrayList can be reversed using a utility function provided as part of the Collections class.

Collections.reverse(list);

The list is reversed in place, which means the elements in the list are reversed inside the same object.

Collections.reverse(array); //DOES NOT WORK!

The method works only on implementations of List. However, if you would like to reverse the order of elements in an Array with brevity of code, and are not concerned about the expense of the operation, a possible solution is as follows.

List<Integer> list = Arrays.asList(array);
Collections.reverse(list);
array = list.toArray(new Integer[list.size()]);

The above code wraps the array in a list, reverses the list and converts the list back to an array. Since Arrays.asList returns a fixed-size view backed by the original array, the reversal actually writes through to the array itself; the final conversion is kept only for clarity. Please avoid this pattern if possible, and use a List throughout your program instead of arrays.
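For object arrays, an explicit in-place swap avoids the list round trip entirely; a short sketch (the helper name reverse is illustrative):

```java
import java.util.Arrays;

public class ReverseArray {

    // Swap elements from both ends, moving toward the middle
    static <T> void reverse(T[] arr) {
        for (int i = 0, j = arr.length - 1; i < j; i++, j--) {
            T tmp = arr[i];
            arr[i] = arr[j];
            arr[j] = tmp;
        }
    }

    public static void main(String[] args) {
        Integer[] array = {1, 2, 3, 4};
        reverse(array);
        System.out.println(Arrays.toString(array)); // [4, 3, 2, 1]
    }
}
```

Note that this generic version works for object arrays only; primitive arrays such as int[] would need their own overload.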

JSON.equals in Java to compare two JSONs

JSON.areEqual(json1,json2)

The function returns true if the two JSONs are equal and false if they are unequal. Each parameter can be a JSONObject of type org.json.JSONObject or a JSON String.

The JSON utility is available as part of BlobCity Commons

Download JAR | View Source on GitHub

com.blobcity.json.JSON.areEqual("{}", "{}"); -> true
JSON.areEqual("{\"a\": \"1\"}", "{}"); -> false

The function checks the complete JSON. Every element of the JSON must be equal for the equality check to pass. The following gives areEqual => false

{
  "name": "Tom",
  "country": "USA"
}
{
  "name": "Tom",
  "country": "US"
}

Deep checks are also supported: nested JSONs must be equal for the parent JSONs to be equal. The pairs below illustrate this.

areEqual => true

{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}
{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}

areEqual => false

{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}
{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "My lane"
  }
}

areEqual => false

{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}
{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA",
    "zip": "19700"
  }
}

Array comparisons are also supported. Array elements must be in the same order in both JSONs for the equality check to pass.

areEqual => true

{
  "name": "Tom",
  "roles": ["admin", "user"]
}
{
  "name": "Tom",
  "roles": ["admin", "user"]
}

areEqual => false

{
  "name": "Tom",
  "roles": ["user", "admin"]
}
{
  "name": "Tom",
  "roles": ["admin", "user"]
}
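The same deep, order-sensitive semantics can be illustrated with plain JDK collections, since Map.equals and List.equals already recurse element by element; a sketch of the comparison rules only, not of the BlobCity implementation:

```java
import java.util.List;
import java.util.Map;

public class DeepEqualsDemo {
    public static void main(String[] args) {
        Map<String, Object> a = Map.of(
                "name", "Tom",
                "address", Map.of("line1", "Lane 1, USA"),
                "roles", List.of("admin", "user"));

        Map<String, Object> b = Map.of(
                "name", "Tom",
                "address", Map.of("line1", "Lane 1, USA"),
                "roles", List.of("admin", "user"));

        Map<String, Object> c = Map.of(
                "name", "Tom",
                "address", Map.of("line1", "Lane 1, USA"),
                "roles", List.of("user", "admin")); // same elements, different order

        // Map.equals recurses into nested maps and lists,
        // mirroring the deep and order-sensitive checks described above
        System.out.println(a.equals(b)); // true
        System.out.println(a.equals(c)); // false
    }
}
```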


Iterate Over Keys of JSONObject – Java

JSONs are commonly used in many Java programs, the most common libraries being org.json and com.google.gson. The examples below show how to iterate over the keys of both JSON object types in Java 8.

With org.json.JSONObject

jsonObject.keys().forEachRemaining(key -> System.out.println(key));
jsonObject.keySet().forEach(key -> System.out.println(key));
jsonObject.keySet().parallelStream().forEach(key -> System.out.println(key));

The above methods simply iterate through all the first-level keys within a JSONObject and print them to the console. The .keys() function returns an Iterator and does not support parallel execution. The .keySet() function returns a Set of the keys and may be iterated in a single thread or in parallel.

If printing to the console were the only objective, the lambda expression could be better written as

jsonObject.keySet().forEach(System.out::println);

With com.google.gson.JsonObject

jsonObject.entrySet().forEach(entry -> System.out.println(entry.getKey()));
jsonObject.entrySet().parallelStream().forEach(entry -> System.out.println(entry.getKey()));

Google’s JsonObject implementation provides iteration over the key-value pairs inside the JsonObject. So not only the key, but also the value can be obtained within a single iteration.

jsonObject.entrySet().parallelStream().forEach(entry -> {
    System.out.println(entry.getKey());
    System.out.println(entry.getValue().getAsString());
});

BlobCity joins Docker Certification Program

An enterprise class multi-model and real-time analytics database can now be powered up right out of Docker containers.

India, 03 March 2017 – BlobCity is pleased to announce the availability of BlobCity DB Enterprise on the Docker Store. Today BlobCity DB has become the technology backbone chosen by many companies for their real-time analytics requirements. The enterprise license provides organisations of all sizes with access to a powerful, scalable and reliable database technology.

“We would like to congratulate BlobCity on their acceptance of the BlobCity DB into the Docker Certification Program,” said Marianna Tessel, EVP, Strategic Development. “Enterprise IT teams are looking to Docker to provide recommendations and assurances on the ecosystem of container content, infrastructure and extensions. BlobCity’s inclusion into the program indicates that BlobCity DB, a real-time analytics database has been tested and verified by Docker, confirming for customers that BlobCity DB container images have been evaluated for security and are supported and built according to best practices.”

About BlobCity

BlobCity is a multi-model real-time analytics database. It removes the database as a concern from application architectures. It not only processes stored data at high speeds but also processes data in motion during ongoing transactions.

Complete In-memory & On-disk storage engines

BlobCity DB offers dual storage methods, one in-memory and the second on-disk. Dual storage allows you to split data between disk and memory data stores, and querying across data in-memory and on-disk has never been this easy. This approach significantly improves cross-query capabilities without a strain on backend infrastructure budgets.

Hybrid Transactional / Analytical Processing

BlobCity DB can be used as a sole database backend to perform both online transaction processing and online analytical processing for the purpose of real-time operational intelligence processing.

Availability

BlobCity DB is available on the Docker Store with a full enterprise edition license and a 1 month free trial. The trial also comes with standard enterprise support.

For more information on BlobCity DB, visit https://blobcity.com or visit BlobCity DB at the Docker Store – http://store.docker.com

All product and company names herein may be trademarks of their registered owners.