(DRAFT) Scaling a Django project using Celery and Elasticsearch

Scalability has undoubtedly become an important asset, not only for large software projects but also for mid-sized ones. Indeed, if you look at developer job postings, you can easily see that the ability to build scalable software is one of the most desired skills.

Making software scalable usually requires more than one methodology and several technology stacks. At first it may look difficult and seem to demand deep experience with those methods and stacks. And yes, scalability is a serious topic. Even so, in this post I will show how to make a Django application scalable. By the end, you will have learned the fundamental concepts and approaches.

PLAN

1. Create a Django project that is a little product management system.

2. Make adding a product asynchronous instead of synchronous.

3. Sync Elasticsearch with the primary database (SQLite).

4. Use Elasticsearch for autocomplete and real-time analysis.

 

1. Product management app with Django

Since I assume you are familiar with Python and Django, I will not go into every detail of building a Django app; instead, I will cover only the operations specific to this tutorial.

Start a Django project and create an app named products.

First, define a product model as follows:
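
The title and image_url fields are the ones the app accepts later in this post; everything else in this sketch is an assumption:

```python
# products/models.py: a minimal Product model sketch
from django.db import models


class Product(models.Model):
    title = models.CharField(max_length=255)
    image_url = models.URLField()
    created_at = models.DateTimeField(auto_now_add=True)  # assumed extra field

    def __str__(self):
        return self.title
```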

…and the corresponding form:
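
A minimal ModelForm over the same two fields:

```python
# products/forms.py: validates incoming POST data
from django import forms

from products.models import Product


class ProductForm(forms.ModelForm):
    class Meta:
        model = Product
        fields = ["title", "image_url"]
```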

Define the following view for your product:
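
Here is a sketch of the blocking first version; do_other_slow_work is a stand-in for whatever slow, resource-heavy step follows the insert, and its five-second sleep matches the wait described later in this post:

```python
# products/views.py: a deliberately blocking first version
import time

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

from products.forms import ProductForm


def do_other_slow_work(title, image_url):
    """Stand-in for slow work, e.g. fetching and resizing the image."""
    time.sleep(5)


@csrf_exempt
def add_product(request):
    form = ProductForm(request.POST)
    if not form.is_valid():
        return JsonResponse({"errors": form.errors}, status=400)
    form.save()                              # blocking database insert
    do_other_slow_work(**form.cleaned_data)  # blocking slow work
    return JsonResponse({"status": "ok"})
```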

Then add this view to Django's URL configuration:
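
Assuming the view above, the wiring could look like this (the URL itself is an arbitrary choice; on older Django versions you would use url() from django.conf.urls instead of path()):

```python
# urls.py: route requests to the product view
from django.urls import path

from products.views import add_product

urlpatterns = [
    path("products/add/", add_product),
]
```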

We are done for now with our very basic Django app, which accepts POST requests containing a valid product title and image_url. Let's test it via curl.
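
Assuming the URL and fields above, a request like curl -X POST -d "title=Laptop&image_url=http://example.com/laptop.png" http://localhost:8000/products/add/ demonstrates the problem: the response arrives only after the five seconds of slow work have passed.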

As you can see, our application does all of its operations in real time, in a blocking way. How long the blocking lasts varies from app to app, but even three seconds is bothersome: we do not want to wait for the application, we want it to stay responsive. This leads us to find a way to run all the slow, resource-heavy operations asynchronously, letting the client get a response immediately that says its request looks valid and will be processed later. At this point, Celery comes to help us run operations asynchronously, and even on a schedule if desired.

2. Evolving to asynchronous operations using Celery

First, you need to define the Celery client. Celery needs a message broker; the most used ones are Redis and RabbitMQ. We will use Redis for the sake of simplicity. However, Redis may not be the best choice in production, so please read about each broker's pros and cons before choosing one. I will not go deeper into how Celery works or how to optimize it; please refer to the Celery documentation for more information.
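
A minimal sketch of that client, following the pattern in Celery's Django documentation and assuming the project is named productms (details vary slightly between Celery versions):

```python
# productms/celery.py: define the project-wide Celery app instance
import os

from celery import Celery

# make Django settings available to worker processes
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "productms.settings")

app = Celery("productms")
app.config_from_object("django.conf:settings")  # read Celery config from settings
app.autodiscover_tasks()  # pick up tasks.py modules from installed apps
```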

Add the following imports to the project's __init__.py to make the Celery app instance defined above available throughout the project:
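
Following the same documented pattern, that means something like:

```python
# productms/__init__.py: load the Celery app whenever Django starts
from .celery import app as celery_app

__all__ = ("celery_app",)
```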

Lastly, add these Celery directives to the settings file:
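
With Redis as the broker, they would look roughly like this (the exact setting names depend on your Celery version):

```python
# settings.py: Celery directives, assuming a local Redis instance
BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_TASK_SERIALIZER = "json"
CELERY_ACCEPT_CONTENT = ["json"]
```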

Now our Celery app instance is ready, and we can define asynchronous operations.

Let's first make the do_other_slow_work function asynchronous. To do so, we simply decorate the function with Celery's task decorator and invoke it as do_other_slow_work.delay(...) instead of do_other_slow_work(...), and we are done.
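
In sketch form, reusing the assumed project name:

```python
# products/tasks.py: the slow function, now a Celery task
import time

from productms.celery import app


@app.task
def do_other_slow_work(title, image_url):
    """The same stand-in slow work as before, now executed by a worker."""
    time.sleep(5)
```

In the view, the direct call is then replaced with do_other_slow_work.delay(**form.cleaned_data).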

Start the Celery workers that will do the actual work by running the tasks:
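
With the assumed project name, the command is along the lines of celery -A productms worker --loglevel=info; the --concurrency option controls how many tasks a single worker process runs in parallel.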

Then send a request to test the system, using the same curl command as before.

As you can see, we did not have to wait five seconds for a response. The do_other_slow_work.delay(...) invocation put a task into Celery's queue to be run later. Celery workers watch the queues, and when a new task appears in a queue they run it. Celery can also be tuned: we can choose how many workers run in parallel, run specific tasks on a remote machine, or give priority to some tasks, for instance.

At this point, we have taken a big step toward scaling the add-product operation: under a heavy load of add-product requests, we can simply increase the number of Celery workers running in parallel.

A little bit more scalability

You may notice that inserting the product into the database is still a blocking step, which may look negligible for now. However, since database transactions are I/O operations, they will slow the application down under too many requests. In theory, it is a good idea to run all I/O operations (network, database, file, etc.) asynchronously; but in practice, do not forget Knuth's famous statement:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. (Donald Knuth)

To hand the new-product insertion over to a Celery worker, we make more or less the same modifications as before: move the code segment into a separate function, decorate that function as a task, and call it with f.delay(...), as sketched below.
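
Under the same assumptions as the earlier sketches, the insertion task and the reworked view would look something like this:

```python
# products/tasks.py: the database insert, moved into a Celery task
from productms.celery import app  # assumed project name
from products.models import Product


@app.task
def insert_product(title, image_url):
    Product.objects.create(title=title, image_url=image_url)
```

```python
# products/views.py: the view now only validates and enqueues
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

from products.forms import ProductForm
from products.tasks import do_other_slow_work, insert_product


@csrf_exempt
def add_product(request):
    form = ProductForm(request.POST)
    if not form.is_valid():
        return JsonResponse({"errors": form.errors}, status=400)
    insert_product.delay(**form.cleaned_data)      # queued, not executed here
    do_other_slow_work.delay(**form.cleaned_data)  # queued, not executed here
    return JsonResponse({"status": "queued"})
```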

So far, we have moved the potentially slow code segments into task functions and run them with Celery workers, which can work in parallel on different machines. That is, we now have the ability to scale the product-insertion process.

3. Sync Elasticsearch with the primary database (SQLite)

preparing…

4. Use Elasticsearch for autocomplete and real-time analysis

preparing…

Conclusions and takeaways

preparing…

 


Speech works

http://www.ispeech.org/text.to.speech?

https://www.ivona.com/

http://www.naturalreaders.com/index.html


Memory organization and cache management

In computer systems, caches are used to hide the slowness of main memory, which is larger and cheaper than cache memory. Caches are faster because they sit close to the CPU and are not as large as main memory. In modern computers there may be more than one level of cache; together, the layers compose a memory hierarchy.

 


The memory hierarchy is sometimes described as an illusion presented to the processor: the processor appears to have a memory as large and as fast as it needs, at a reasonable cost. In short, from a broad perspective it is easy to understand why memory hierarchies are needed; to understand how one works, however, you need to narrow the perspective, e.g. by considering only two memory layers at a time.

Key concepts are:

  • cache miss and hit ratios
  • memory and cache average access times (a short worked example follows this list)
  • memory size, cache size
  • block or cache-line size, set size
  • direct mapping, fully associative mapping, set-associative mapping
  • address partitioning (tag bits, index/set bits, block-offset bits)
  • replacement algorithms such as LRU and random
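
As a quick worked example of how the first two concepts combine (all numbers below are made up), the average memory access time of a two-level hierarchy is the hit time plus the miss ratio times the miss penalty:

```python
# Average memory access time (AMAT) for a cache + main memory pair:
#   AMAT = hit_time + miss_ratio * miss_penalty
hit_time = 1.0        # ns: time to access the cache (assumed)
miss_ratio = 0.05     # 5% of accesses miss the cache (assumed)
miss_penalty = 100.0  # ns: extra cost of going to main memory (assumed)

amat = hit_time + miss_ratio * miss_penalty
print(f"AMAT = {amat:.1f} ns")  # 6.0 ns: near cache speed, not memory speed
```

This is why the hierarchy works as an "illusion": as long as most accesses hit the small fast layer, the processor rarely pays the full cost of the large slow one.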

On the internet, there are a lot of good resources that try to explain how the memory hierarchy works. I will share some that I came across.

One is the lecture series by Luis Ceze from the University of Washington (videos 8.1 through 8.6).

For memory mapping techniques, the following videos will be helpful:


Memory design and virtual memory

In SWE 514 (Computer Systems), I am studying low-level concepts of CPU and memory design. Although I am totally abstracted away from them in daily programming life, I think that if you are a software developer it is good to catch at least a glimpse of how memory management works, regardless of whether you write low-level or high-level programs. In many respects, the management burden is carried by the operating system; however, some knowledge of these topics can help you with performance or security issues. I am not going to explain what memory management is here; instead, I will share some useful resources.

The article titled "Principles of Virtual Memory" by Carl Burch of Hendrix College seems to me to cover most sides of the virtual memory concept very concisely. After reading it, you will understand why virtual memory exists and which parts of it are actually crucial.

The following YouTube playlist discusses the virtual memory concept: why we need it and what parts it consists of. You can find general information about paging, page frames, address translation, the translation lookaside buffer (TLB), caches, and replacement methods by watching the playlist from the first video through the 14th.

 

This article, published by the University of Maryland, covers the conceptual outline of virtual memory and related topics. There are also other lecture notes there about CPU and memory design.

Others:

  • https://www.youtube.com/watch?v=WxYiXDSyiZ0
  • https://www.youtube.com/watch?v=0aHuj2BNsk0
  • https://www.youtube.com/watch?v=DlDBqHuvAUw

Injection libraries for Java, Android: Butterknife & Roboguice

In this article I will show you how to inject Android views using Roboguice and Butterknife, and how to do dependency injection with Roboguice. From this you can infer that Butterknife is not a dependency injection library.

Why do we need them?

If you develop programs beyond "hello world" in Java or any other language, you probably want to get rid of repetitive, cumbersome boilerplate code. Indeed, we should focus on our logic, not on boilerplate.

For example, in Android development, you get view references as follows:

In this code, all we want is to change a TextView's text and show the labels of the two buttons; that is our program's logic. Obviously, we have to write a lot of statements to achieve such a simple task. Now try to guess what happens when you have a lot of views, more than just two…

Butterknife helps us to focus on logic

With this library, creating view references is much easier, so you can write a shorter version of the above example with Butterknife like this:

 

Needless to say, we can do the same task with less code. And less code is better code, as long as we do not cut so much that we sacrifice readability.

For more information about Butterknife you can visit the pages below:
1. https://github.com/JakeWharton/butterknife 
2. http://jakewharton.github.io/butterknife/

Roboguice offers more than Butterknife does

So far, we have seen how to do view injection and have accessed views in a shorter way. But accessing views is not all we do: most applications have more than a few views, and some system services, classes, and objects will probably be required too.

Roboguice is here to strip away the boilerplate code every application needs. For example, when you create a new Android activity, you must override onCreate and specify which layout the activity will use.

Roboguice can do this task by injecting resources.

Or it can inject system services as follows:

For more information about Roboguice you can visit https://github.com/roboguice/roboguice

Wrapping up

If you are sick of referencing views, go use Butterknife; it will save considerable time in the long run. If you want more, use Roboguice. But be aware that there is a performance difference between the two: Roboguice does its job at runtime, whereas Butterknife works at compile time. That means Roboguice is the slower one, while Butterknife has no real performance impact.

1. http://java.dzone.com/articles/dependency-injection-roboguice
2. http://stackoverflow.com/questions/27180820/difference-between-roboguice-and-butter-knife-dependency-injection


Never Run Your MongoDB as the Root User

We recently had a "too many open files" problem on one of our MongoDB servers. MongoDB kept saying "Out of file descriptors. Waiting one second before trying to accept more connections." and accepted no more connections.

The first thing that came to mind was too much load: our MongoDB servers were no longer big enough to carry it. But that was wrong. After some research it became clear that the cause was Linux's per-user limits, and therefore our bad decision to run the MongoDB servers as the root user.

In Linux systems, every user has limits that decide how many system resources they are allowed to consume. These limits prevent one misbehaving user from draining the whole system and starving the other users of resources.

On our Linux servers, the root user had lower limits than MongoDB's suggested values. Even if root had had the suggested limits, we could have hit the problem anyway, because the root user is responsible for other operational work, not only for running MongoDB; so the limits actually available to MongoDB were much lower.

Solving the issue was easy: we only had to increase root's limits. But you cannot know how much of those limits actually goes to MongoDB, because root's responsibilities are always open to change.

The best solution is to run MongoDB under a user that has the suggested limits and whose only responsibility is to run MongoDB.

For more information visit the following page: http://docs.mongodb.org/manual/reference/ulimit/


Euclidean Distance (similarity)

There are algorithms that measure the similarity of elements in a set and produce a similarity score; their advantages and disadvantages vary with the situation.

In the book Programming Collective Intelligence, the similarity between movie critics is computed using Euclidean distance.

In the scenario, people have rated various movies, and the similarity between one person and the others is computed over these ratings.

Apart from a few additions, I am reproducing the code exactly as it appears in the book.
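
A minimal sketch of the same idea, with made-up stand-in data rather than the book's:

```python
# recommendations.py: similarity score from Euclidean distance
from math import sqrt

# each person maps movie titles to the rating they gave (stand-in data)
critics = {
    "Mustafa": {"Movie A": 4.0, "Movie B": 3.5, "Movie C": 5.0},
    "Ayse":    {"Movie A": 5.0, "Movie B": 3.0, "Movie C": 4.5},
}


def sim_distance(prefs, person1, person2):
    shared = [item for item in prefs[person1] if item in prefs[person2]]
    if not shared:
        return 0  # nothing in common, so no basis for similarity
    sum_of_squares = sum(
        (prefs[person1][item] - prefs[person2][item]) ** 2 for item in shared
    )
    # turn a distance into a score in (0, 1]: 1 means identical tastes
    return 1 / (1 + sqrt(sum_of_squares))


print(sim_distance(critics, "Mustafa", "Ayse"))
```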

If you save this code as recommendations.py and run it, you can see how close "Mustafa's" movie taste is to each of the other people's. A score of 1 means identical; 0 means not similar at all.

Here the computation is done over movies, but we could have used tags, music, or any other unit instead. The only requirement for this method is that we can express the units numerically.

The Euclidean calculation is really a hypotenuse calculation: two points on the x-y plane are treated as the two ends of a right triangle's hypotenuse, and the distance between them, i.e. the hypotenuse, is computed with the formula sqrt((x1 - x2)^2 + (y1 - y2)^2).

I have listed some links about Euclidean distance below:

http://www.cut-the-knot.org/pythagoras/DistanceFormula.shtml

http://en.wikipedia.org/wiki/Euclidean_distance

http://www.econ.upf.edu/~michael/stanford/maeb4.pdf


Code is what, not how

Code is known as something written by programmers to make computers do certain tasks.

Today, commanding the machine in a language close to the hardware is not necessary for most use cases. When we create a file on a disk, in most cases we do not care how that process is carried out. When we fetch a URL's content, we enjoy concise statements like fetch_content(url) or get_url_content(url).

We are interested not in how, but in what. We are migrating from the imperative paradigm to the declarative one; that is why we invented high-level languages such as Python, C#, Java, and PHP, which read more like a natural language.


Predicting game results from fans’ emotions

Facebook's data science team published a post in which they claim it is possible to predict a football team's performance by mining the emotions of its fans.

The post tells us that before kick-off, fans seem positive, but the volume of positive emotion is higher among fans of the better team. After the match, the winner's fans express strongly positive emotion, while the other team's fans turn negative.


Actually, we do not need a study to predict this correlation between teams and their fans: it is obvious that when a team wins, its fans get happy and post positive status messages.

Yet I do not think this study is trivial; not every study has to bring about a breakthrough.

 


Migrating from PHP to Python

At last, I have made a firm decision to migrate from PHP to Python. Leaving a tool that has been your bread basket for almost eight years is not an easy decision. Still, I believe that with a little courage to leave your comfort zone and a lot of curiosity, you can sometimes get a better bread basket, and maybe a jar of jam too.

If you want to be a sea, you must give up being a drop.

Why am I leaving PHP? Actually, I am not leaving PHP entirely, but switching my primary language to Python, because my work has been changing. Most of the work required of me involves natural language processing (NLP) and machine learning (ML) techniques, and I, like many other developers, find Python much more capable for this kind of work. It already has many libraries, tools, and frameworks for NLP and ML.

Another reason is that I think web programming will not grow as much as intelligent systems. Maybe I am wrong, but there is no doubt that the future is about intelligent systems, and I want to do some work in this field.

And there is another programming language I am going to get proficient in: Java, because it is the language of Android and of intelligent gadgets such as Google Glass.

Anyway, I must get back to studying Python. See ya!