First study on WSGI and PASTE

1. WSGI

Basically, WSGI is lower level than CGI, which you probably already know. But unlike CGI, WSGI scales and can work in both multithreaded and multiprocess environments, because it is a specification that does not dictate how it is implemented.

For me, Python WSGI is a powerful contract that sits between the web server ('gateway') and the Python code ('application'). I cut and paste the introduction below from PEP 333, the Python Web Server Gateway Interface specification (https://peps.python.org/pep-0333/).

The server(‘gateway’) side invokes a callable object that is provided by the application side. The specifics of how that object is provided are up to the server or gateway. It is assumed that some servers or gateways will require an application’s deployer to write a short script to create an instance of the server or gateway, and supply it with the application object. Other servers and gateways may use configuration files or other mechanisms to specify where an application object should be imported from, or otherwise obtained.

In addition to “pure” servers/gateways and applications/frameworks, it is also possible to create “middleware” components that implement both sides of this specification. Such components act as an application to their containing server, and as a server to a contained application, and can be used to provide extended APIs, content transformation, navigation, and other useful functions.

Throughout this specification, we will use the term “a callable” to mean “a function, method, class, or an instance with a __call__ method”. It is up to the server, gateway, or application implementing the callable to choose the appropriate implementation technique for their needs. Conversely, a server, gateway, or application that is invoking a callable must not have any dependency on what kind of callable was provided to it. Callables are only to be called, not introspected upon.
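For illustration (this sketch is mine, not part of the quoted specification), the same behaviour can be provided either by a plain function or by an instance with a __call__ method; the server calls both the same way:

# A sketch of two equivalent application callables (the names are made up).
def func_app(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['func_app\n']

class AppClass(object):
    def __call__(self, environ, start_response):
        start_response('200 OK', [('Content-type', 'text/plain')])
        return ['AppClass instance\n']

instance_app = AppClass()   # the server only cares that it is callable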

1.2. The Application/Framework Side

The application object is simply a callable object that accepts two arguments. The term “object” should not be misconstrued as requiring an actual object instance: a function, method, class, or instance with a __call__ method are all acceptable for use as an application object. Application objects must be able to be invoked more than once, as virtually all servers/gateways (other than CGI) will make such repeated requests.

(Note: although we refer to it as an “application” object, this should not be construed to mean that application developers will use WSGI as a web programming API! It is assumed that application developers will continue to use existing, high-level framework services to develop their applications. WSGI is a tool for framework and server developers, and is not intended to directly support application developers.)

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return ['Hello world!\n']
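To try simple_app outside a CGI environment, it can be served with the reference server from the standard library. This is just a quick sketch, pasted below the simple_app definition; it is not part of the specification:

# Serve simple_app on http://127.0.0.1:8000/ using the stdlib reference server.
from wsgiref.simple_server import make_server

httpd = make_server('127.0.0.1', 8000, simple_app)
httpd.serve_forever()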

1.3. The Server/Gateway Side

The server or gateway invokes the application callable once for each request it receives from an HTTP client, that is directed at the application. To illustrate, here is a simple CGI gateway, implemented as a function taking an application object. Note that this simple example has limited error handling, because by default an uncaught exception will be dumped to sys.stderr and logged by the web server.

import os, sys

def run_with_cgi(application):

    environ = dict(os.environ.items())
    environ['wsgi.input']        = sys.stdin
    environ['wsgi.errors']       = sys.stderr
    environ['wsgi.version']      = (1, 0)
    environ['wsgi.multithread']  = False
    environ['wsgi.multiprocess'] = True
    environ['wsgi.run_once']     = True

    if environ.get('HTTPS', 'off') in ('on', '1'):
        environ['wsgi.url_scheme'] = 'https'
    else:
        environ['wsgi.url_scheme'] = 'http'

    headers_set = []
    headers_sent = []

    def write(data):
        if not headers_set:
             raise AssertionError("write() before start_response()")

        elif not headers_sent:
             # Before the first output, send the stored headers
             status, response_headers = headers_sent[:] = headers_set
             sys.stdout.write('Status: %s\r\n' % status)
             for header in response_headers:
                 sys.stdout.write('%s: %s\r\n' % header)
             sys.stdout.write('\r\n')

        sys.stdout.write(data)
        sys.stdout.flush()

    def start_response(status, response_headers, exc_info=None):
        if exc_info:
            try:
                if headers_sent:
                    # Re-raise original exception if headers sent
                    raise exc_info[0], exc_info[1], exc_info[2]
            finally:
                exc_info = None     # avoid dangling circular ref
        elif headers_set:
            raise AssertionError("Headers already set!")

        headers_set[:] = [status, response_headers]
        return write

    result = application(environ, start_response)
    try:
        for data in result:
            if data:    # don't send headers until body appears
                write(data)
        if not headers_sent:
            write('')   # send headers now if body was empty
    finally:
        if hasattr(result, 'close'):
            result.close()

1.4 Middleware: Components that Play Both Sides

Note that a single object may play the role of a server with respect to some application(s), while also acting as an application with respect to some server(s). Such “middleware” components can perform such functions as:

  • Routing a request to different application objects based on the target URL, after rewriting the environ accordingly.
  • Allowing multiple applications or frameworks to run side-by-side in the same process
  • Load balancing and remote processing, by forwarding requests and responses over a network
  • Perform content postprocessing, such as applying XSL stylesheets

The presence of middleware in general is transparent to both the “server/gateway” and the “application/framework” sides of the interface, and should require no special support. A user who desires to incorporate middleware into an application simply provides the middleware component to the server, as if it were an application, and configures the middleware component to invoke the application, as if the middleware component were a server. Of course, the “application” that the middleware wraps may in fact be another middleware component wrapping another application, and so on, creating what is referred to as a “middleware stack”.
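As a minimal sketch of such a stack (the middleware and application here are made up for illustration, not taken from the specification):

# A tiny middleware: acts as an application to the server and as a server
# to the wrapped application.
class UpperMiddleware(object):
    def __init__(self, application):
        self.application = application
    def __call__(self, environ, start_response):
        # pass the request through unchanged, upper-case the response body
        return [data.upper() for data in self.application(environ, start_response)]

def inner_app(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return ['hello from the inner application\n']

# the server is handed the outermost wrapper exactly as if it were a plain application
stacked_app = UpperMiddleware(inner_app)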

For the most part, middleware must conform to the restrictions and requirements of both the server and application sides of WSGI. In some cases, however, requirements for middleware are more stringent than for a “pure” server or application, and these points will be noted in the specification.

Here is a (tongue-in-cheek) example of a middleware component that converts text/plain responses to pig latin, using Joe Strout’s piglatin.py. (Note: a “real” middleware component would probably use a more robust way of checking the content type, and should also check for a content encoding. Also, this simple example ignores the possibility that a word might be split across a block boundary.)

from piglatin import piglatin

class LatinIter:

    """Transform iterated output to piglatin, if it's okay to do so

    Note that the "okayness" can change until the application yields
    its first non-empty string, so 'transform_ok' has to be a mutable
    truth value.
    """

    def __init__(self, result, transform_ok):
        if hasattr(result, 'close'):
            self.close = result.close
        self._next = iter(result).next
        self.transform_ok = transform_ok

    def __iter__(self):
        return self

    def next(self):
        if self.transform_ok:
            return piglatin(self._next())
        else:
            return self._next()

class Latinator:

    # by default, don't transform output
    transform = False

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):

        transform_ok = []

        def start_latin(status, response_headers, exc_info=None):

            # Reset ok flag, in case this is a repeat call
            del transform_ok[:]

            for name, value in response_headers:
                if name.lower() == 'content-type' and value == 'text/plain':
                    transform_ok.append(True)
                    # Strip content-length if present, else it'll be wrong
                    response_headers = [(name, value)
                        for name, value in response_headers
                            if name.lower() != 'content-length'
                    ]
                    break

            write = start_response(status, response_headers, exc_info)

            if transform_ok:
                def write_latin(data):
                    write(piglatin(data))
                return write_latin
            else:
                return write

        return LatinIter(self.application(environ, start_latin), transform_ok)


# Run foo_app under a Latinator's control, using the example CGI gateway
from foo_app import foo_app
run_with_cgi(Latinator(foo_app))

2. PASTE and PASTE Deploy

Python Paste is known as ‘a framework for web frameworks’: it wraps WSGI to make it easier to manage and use, and it includes a simple web server that can serve WSGI requests. Paste is separated into three packages:

  • WebOb: wraps the WSGI request environment
  • Paste Deploy: loads WSGI applications and servers from a configuration file
  • Paste Script and related tools: WebTest, ScriptType, INITools, Tempita, WaitForIt, WPHP, WSGIFilter, WSGIProxy.

Paste Deploy contains three modules:

  • paste.deploy.loadwsgi — loads WSGI applications and servers from a configuration file
  • paste.deploy.config — configuration middleware and helpers
  • paste.deploy.converters — converts string configuration values (e.g. asbool)

A config file has different sections. The only sections Paste Deploy cares about have prefixes, like app:main or filter:errors – the part after the : is the “name” of the section, and the part before gives the “type”. Other sections are ignored.

The format is a simple INI format: name = value. You can extend the value by indenting subsequent lines. # is a comment.

Typically you have one or two sections named “main”: an application section ([app:main]) and a server section ([server:main]). A [composite:…] section signifies something that dispatches to multiple applications. In [app:main], “app” is the type and “main” is the name of the WSGI application being defined. Inside an application section, the application can be referenced in several ways (a minimal loading sketch follows the list):

  • use = config:another_config_file.ini#app_name –> look the named application up in another configuration file
  • use = egg:MyApp –> find MyApp in an installed egg (Python distribution)
  • use = call:my.project:myapplication –> call the named object directly
  • use = myotherapp –> reuse the definition from the section named ‘myotherapp’ in this file
  • user_names = messages –> any other key/value pair is passed to the application as a user-defined parameter
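Putting this together, here is a minimal sketch (the file name minimal.ini is made up; the factory path reuses the MyHome class from the apps.py example in the next section):

[app:main]
paste.app_factory = apps:MyHome.app_factory

It can then be loaded with paste.deploy.loadapp; ‘main’ is the default name when none is given:

from paste.deploy import loadapp
wsgi_app = loadapp('config:/full/path/to/minimal.ini')   # loads the [app:main] section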

2.1 PASTE Deploy Examples

First, here is the config.ini file:

[app:home]
paste.app_factory = apps:MyHome.app_factory
; define an application named home; when it is called, Paste Deploy finds class MyHome in apps.py and invokes its app_factory
[app:home_use]
use = call:apps:MyHome_USE
; load the MyHome_USE object in apps.py directly; equivalent to declaring paste.app_factory = apps:MyHome_USE.app_factory
[composite:pub]
; dispatch the request according to the URL: egg:Paste#urlmap routes different URL prefixes to different applications
use = egg:Paste#urlmap
/ = pub0
; the root path is dispatched to the application named pub0
/V1 = pub1
/V2 = pub2

[app:pub0]
paste.app_factory = apps:Pub0.app_factory
; this is how pub0 handles the request


[filter-app:pub1]
paste.filter_factory = apps:PubFilter.factory
next = subpub
; filter-app filters the requests; requests that pass the filter are handed over to subpub


[app:subpub]
paste.app_factory = apps:Pub1.app_factory

[pipeline:pub2]
pipeline = logip logmethod pubv2
; a pipeline passes the request through a series of filters before the final application
[filter:logip]
paste.filter_factory = apps:LogIPFilter.factory

[filter:logmethod]
paste.filter_factory = apps:LogMethod.factory

[app:pubv2]
paste.app_factory = apps:Pub2.app_factory
# an example of passing user-defined arguments to the application
para1 = This is first
para2 = This is second

And here is the apps.py source code:

import os
import eventlet
from eventlet import wsgi, listen
from paste import deploy
from webob import Request
cfg_file = 'config.ini'
server_list = [('home', 8001),('home_use',8002),('pub',8003)]
# three servers named home, home_use and pub; each name matches an [app:...] or [composite:...] section in config.ini


class Pub0(object):
    # __call__ makes a Pub0 instance callable like a function, so the instance
    # itself can serve as a WSGI application.
    def __call__(self, environ, start_response):
        start_response('200 OK', [("Content-type", "text/plain")])
        return ['Hello from PUB0\n']

    # cls is the Pub0 class itself; app_factory returns a new instance for Paste Deploy
    @classmethod
    def app_factory(cls, global_conf, **local_conf):
        return cls()



class Pub1(object):
    def __call__(self, environ, start_response):
        start_response('200 OK', [("Content-type", "text/plain")])
        return ['Hello from PUB1\n']

    @classmethod
    def app_factory(cls, global_conf, **local_conf):
        return cls()



class Middleware(object):
    def __init__(self, app):
        self.app = app

    @classmethod
    def factory(cls, global_conf, **kwargs):
        def filter(app):
            return cls(app)
        return filter

class PubFilter(Middleware):
    # a middleware filter that rejects POST requests and passes everything else through
    def __init__(self, app):
        super(PubFilter, self).__init__(app)

    def __call__(self, environ, start_response):
        req = Request(environ)
        if req.method == 'POST':
            start_response('200 OK', [("Content-type", "text/plain")])
            return ['Bad request\n']
        else:
            return self.app(environ, start_response)


class Pub2(object):
    def __call__(self, environ, start_response):
        start_response('200 OK', [("Content-type", "text/plain")])
        return ['Hello from PUB2\n']

    @classmethod
    def app_factory(cls, global_conf, **local_conf):
        print "we print the parameters from the ini file: %s" % local_conf
        return cls()


class LogIPFilter(Middleware):
    def __init__(self, app):
        super(LogIPFilter, self).__init__(app)

    def __call__(self, environ, start_response):
        print 'request IP is: %s' % environ['REMOTE_ADDR']
        return self.app(environ, start_response)


class LogMethod(Middleware):
    def __init__(self, app):
        super(LogMethod, self).__init__(app)

    def __call__(self, environ, start_response):
        print 'Method is: %s' % environ['REQUEST_METHOD']
        return self.app(environ, start_response)


class MyHome(object):

    def __call__(self, environ, start_response):
        start_response('200 OK', [("Content-type", "text/plain")])
        return ['Hello from MyHome\n']

    @classmethod
    def app_factory(cls, global_conf, **local_conf):
        return cls()


class MyHome_USE(object):

    def __call__(self, environ, start_response):
        start_response('200 OK', [("Content-type", "text/plain")])
        return ['Hello from MyHome with USE\n']

    @classmethod
    def app_factory(cls, global_conf, **local_conf):
        return cls()

# rebind the module-level name so that "use = call:apps:MyHome_USE" in config.ini resolves to a factory
MyHome_USE = MyHome_USE.app_factory

if __name__ == '__main__':
    host = '127.0.0.1'
    servers = []
    for app_name, port in server_list:
        socket = listen((host, port))
        app = deploy.loadapp('config:%s' % os.path.abspath(cfg_file), app_name)

        print "%s is starting" % app_name
        servers.append(eventlet.spawn(wsgi.server, socket, app))

    for server in servers:
        server.wait()
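To run the example locally, this is roughly what I use (a sketch; the PyPI package names are assumptions about your environment, and Python 2 is required because of the print statements):

pip install Paste PasteDeploy WebOb eventlet
python apps.py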

2.2 TESTS

yb@yb-ThinkPad-T440p:~$ curl http://127.0.0.1:8001
Hello from MyHome
yb@yb-ThinkPad-T440p:~$ curl http://127.0.0.1:8001
Hello from MyHome
yb@yb-ThinkPad-T440p:~$ curl http://127.0.0.1:8001
Hello from MyHome
yb@yb-ThinkPad-T440p:~$ curl http://127.0.0.1:8001
Hello from MyHome

yb@yb-ThinkPad-T440p:~$ curl -d "anything" http://127.0.0.1:8003/V1
Bad request
yb@yb-ThinkPad-T440p:~$ curl -d "anything" http://127.0.0.1:8003/V1
Bad request
yb@yb-ThinkPad-T440p:~$ curl http://127.0.0.1:8003/V2
Hello from PUB2

(printed on the server console by the pipeline filters)
request IP is: 127.0.0.1
Method is: GET

Virtualbox PXE Test

1. Introduction

VirtualBox is a great tool for testing PXE boot. There are two ways to build and test PXE in VirtualBox: one is to set up the DHCP and TFTP servers yourself, using a bridged or host-only adapter; the other is to take advantage of VirtualBox's built-in DHCP and TFTP servers.

In order to focus on learning how PXE sets up a machine, I chose the NAT network to implement the PXE test environment.

The whole process can be divided into two parts: the first is VirtualBox configuration, the second is netboot file preparation.

2. Virtualbox Configuration

First, follow the VirtualBox virtual machine creation guide to create a virtual machine. Pay attention to the network configuration. (screenshot: network configuration)

Set the virtual machine to boot from the network first. (screenshot: boot order)

3. TFTP preparation and netboot file

For VirtualBox, the software's built-in TFTP server looks up files in

/home/currentuser/.config/VirtualBox

So, create a TFTP folder there and put the netboot files in this folder.

I downloaded the Ubuntu netboot files from the Ubuntu archive with the command:

wget -r -np -nH -R index.html http://archive.ubuntu.com/ubuntu/dists/trusty-updates/main/installer-amd64/current/images/netboot/

which downloads the whole directory.

Then put the files under the TFTP folder.

The structure of the TFTP folder is shown below. (screenshot: TFTP folder)

It is quite important to name the boot file ubuntu.pxe (ubuntu is the name of this virtual machine, and I will explain below why we should set this name).
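The name-based lookup comes from VirtualBox's built-in NAT boot ROM, which requests a file named after the VM. If you would rather keep the original file name, newer VirtualBox versions also let you set the boot file explicitly; this is only a sketch (the VM name "ubuntu" and the path handling relative to the TFTP folder are assumptions, and the walkthrough below does not rely on it):

VBoxManage modifyvm "ubuntu" --nattftpfile1 /pxelinux.0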

4. Booting from virtualbox

Start the virtual machine and it will boot from the network over PXE. (screenshot)

5. How it works

The whole picture is shown below (the picture is from image_site). (figure: PXE boot workflow)

5.1 Booting from PXE

When the computer is set to boot from PXE, the BIOS loads the PXE firmware code from the network card's ROM into memory, then hands control of the system over to PXE.

5.2 Obtaining an IP address

The computer (client) then asks for an IP address; the DHCP server answers the request and assigns it an unused IP address. The DHCP configuration is essential: it tells the client where to find the TFTP server and which file to load. Usually, the DHCP server is configured as below.

default-lease-time 600;
max-lease-time 7200;
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.50 192.168.10.100;
    option subnet-mask 255.255.255.0;
    option routers 192.168.10.123;
    option broadcast-address 192.168.10.255;
    filename "ubuntu.pxe";
    next-server 192.168.10.123;
}

You can see that this configuration points the client at a TFTP server on 192.168.10.123, and the file to load from it is ubuntu.pxe. This is why pxelinux.0 has to be renamed to ubuntu.pxe.

5.3 Booting ubuntu.pxe

Then ubuntu.pxe takes control of the computer. According to its default configuration, it looks for a boot entry at a given path; on my system, it tries to load pxelinux.cfg/default.

In pxelinux.cfg/default, it says:

include ubuntu-installer/amd64/boot-screens/menu.cfg
default ubuntu-installer/amd64/boot-screens/vesamenu.c32
prompt 0
timeout 0

Then you can find the referenced cfg files at the given paths. Finally, it loads a configuration file like:

label expert
    menu label Expert install
    kernel ubuntu-installer/amd64/linux
    append priority=low vga=788 initrd=ubuntu-installer/amd64/initrd.gz ---
label cli-expert
    menu label Command-^line expert install
    kernel ubuntu-installer/amd64/linux
    append tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false priority=low vga=788 initrd=ubuntu-installe
    include ubuntu-installer/amd64/boot-screens/rqtxt.cfg

It is clear that the system loads the kernel and initrd files from the given paths.

5.4 Booting the OS

Finally, the system boots from the kernel and initrd with the passed parameters. In my example, it enters install mode.

Creating a GRE Network

1. Introduction

A GRE tunnel helps build a point-to-point network; from the public gateway's point of view, it hides the details of the network behind it. In the following sections, I will show how to configure a GRE network through a practical case.

The network information is described below:

node 1 (CentOS): 9.119.43.71, no Internet access, mask 255.255.255.0, gateway 9.119.43.1

node 2 (Ubuntu): 9.110.190.35, has Internet access, mask 255.255.255.0, gateway 9.110.190.1

I would like node 1 to have Internet access through node 2. As node 1 and node 2 are in different subnets, SNAT cannot work directly, so one solution is GRE + SNAT.

2. Close the firewall and insert the gre module

On node 1, close the firewall with:

systemctl stop firewalld
systemctl disable firewalld

On node 2, close the firewall with:

ufw disable

Run these commands on both nodes to make sure the gre module is loaded:

modprobe gre
lsmod | grep gre

3. Create the gre tunnels

Create gre on node 2:

 sudo ip tunnel add gre0 mode gre remote 9.119.43.71 local 9.110.190.35 ttl 255
 sudo ip link set gre0 up
 sudo ip addr add 10.10.10.2 peer 10.10.10.1 dev gre0

Create gre on node 1:

 sudo ip tunnel add gre0 mode gre remote 9.110.190.35 local 9.119.43.71 ttl 255
 sudo ip link set gre0 up
 sudo ip addr add 10.10.10.1 peer 10.10.10.2 dev gre0

4. Enable IPv4 forwarding

On node 2, edit /etc/sysctl.conf and set:

net.ipv4.ip_forward = 1

Then apply it with:

sysctl -p

5. Route table and iptables configuration

The default gateway of node 1 is 9.119.43.1. We want all packets from node 1 to go through gre0 except the packets sent to 9.110.190.35 (the tunnel endpoint), so set the rules below on node 1:

ip route add 9.110.190.35/32 via 9.119.43.1
route del default   # create the outgoing route to node 2 first, or your machine cannot be reached any more!

At this moment your machine can only be reached through node 2!

route add default gw 10.10.10.2

Now node 1 has been set up to pass all packets through gre0.

On node 2, add rules to route packets from the tunnel and masquerade them out of eth2:

  ip route add 10.10.10.0/24 dev gre0
  iptables -t nat -A POSTROUTING -s 10.10.10.0/255.255.255.0 -o eth2 -j MASQUERADE

Now node 1 has Internet access!
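To verify the whole path (a quick sketch; 8.8.8.8 is just an example destination):

# on node 1: the tunnel peer on node 2 should answer
ping -c 3 10.10.10.2
# on node 1: traffic should now be masqueraded out through node 2
ping -c 3 8.8.8.8
# on node 2: watch the tunnel traffic while the pings run
sudo tcpdump -ni gre0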

Path Sum II

Problem:

Given a binary tree and a sum, find all root-to-leaf paths where each path’s sum equals the given sum.

Solution

# Definition for a binary tree node.
# class TreeNode(object):
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None

import copy

class Solution(object):

    def find_paths(self, root, sum, paths, path_tmp):
        # add the current node to the path and subtract its value from the remaining sum
        path_tmp.append(root.val)
        sum = sum - root.val

        if root.left == None and root.right == None and sum == 0:
            # a leaf whose root-to-leaf path adds up to the target: record a copy
            paths.append(copy.deepcopy(path_tmp))
        else:
            if root.left != None:
                self.find_paths(root.left, sum, paths, path_tmp)
            if root.right != None:
                self.find_paths(root.right, sum, paths, path_tmp)

        # backtrack before returning to the parent
        path_tmp.pop()
        return

    def pathSum(self, root, sum):
        """
        :type root: TreeNode
        :type sum: int
        :rtype: List[List[int]]
        """
        paths = []
        path_tmp = []
        if root != None:
            self.find_paths(root, sum, paths, path_tmp)
        return paths
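A quick sanity check (a sketch placed below the Solution class, using the TreeNode definition from the comment above):

class TreeNode(object):
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

root = TreeNode(1)
root.left, root.right = TreeNode(2), TreeNode(3)
print Solution().pathSum(root, 3)   # expected: [[1, 2]]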

Minimum Depth of Binary Tree

Problem:

Given a binary tree, find its minimum depth.

The minimum depth is the number of nodes along the shortest path from the root node down to the nearest leaf node.

Solution

# Definition for a binary tree node.
# class TreeNode(object):
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None

class Solution(object):
    
    def minf(self,a,b):
        if a<b:
            return a
        else:
            return b
    
    def minDepth(self, root):
        """
        :type root: TreeNode
        :rtype: int
        """
        if root ==None:
            return 0
        lmin = self.minDepth(root.left)
        rmin = self.minDepth(root.right)
        
        if lmin==0 and rmin==0:
            return 1
        # a missing child must not count as depth 0; treat it as infinitely deep
        if lmin==0:
            lmin=float('inf')
        if rmin==0:
            rmin=float('inf')
        return self.minf(lmin,rmin)+1
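A quick sanity check (same TreeNode sketch as in the previous problem, placed below the Solution class):

class TreeNode(object):
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

root = TreeNode(1)
root.left = TreeNode(2)           # only a left child, so the missing right side must not count as depth 0
root.left.left = TreeNode(3)
print Solution().minDepth(root)   # expected: 3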

Creating an OpenStack Glance Image

1. Use Public Cloud Image

The simplest way to prepare a glance image is to find one on the Internet.

Here are some useful links for obtaining a Glance image:

Centos7 http://cloud.centos.org/centos/7/images/

Centos6 http://cloud.centos.org/centos/6/images/

Ubuntu http://cloud-images.ubuntu.com/

Fedora https://getfedora.org/en/cloud/download/

As these images use cloud-init for initialization, tenants usually have to use key pairs to log in to the system.

Sometimes we want to log in with a password instead; the cloud-config scripts below can help achieve this.

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True


#cloud-config
chpasswd:
  list: |
    root:passw0rd
    centos:stackops
  expire: False
ssh_pwauth: True
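Either snippet is handed to the instance as user data at boot time; a sketch with the nova client (the image, flavor, key and file names here are placeholders):

nova boot --image centos7 --flavor m1.small --key-name mykey --user-data user-data.txt pwd-demo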

2. Customize the Public Cloud Image

Sometimes we may need to customize a public cloud image. I will show how to modify the cloud image to enable the password login method.

2.1. Download the Cloud Image

Download the cloud image from a public website; in my case I take CentOS 7 as an example.

2.2. Install Guestfs Tools

 sudo apt-get install guestfs-tools

2.3. Mount the Cloud Image

sudo guestmount -a CentOS-7-x86_64-GenericCloud-1503.qcow2 -i --rw /mnt

2.4. Modify the Files

sudo su
cd /mnt
chroot /mnt
vim /etc/cloud/cloud.cfg

To enable password login, just modify these lines:

disable_root: 1
ssh_pwauth:   0

To:

disable_root: 0
ssh_pwauth:   1

Set the root password:

passwd root

2.5. Finish and Unmount

Exit the chroot environment and unmount /mnt.

After these steps, upload the image to the cloud and enjoy it:)
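The upload itself can be done with the Glance command line client, for example (the image name is a placeholder; the file is the one modified above):

glance image-create --name centos7-password --disk-format qcow2 --container-format bare --file CentOS-7-x86_64-GenericCloud-1503.qcow2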

3. Export from a Standard Installation

3.1. Install virt-manager

First, install virt-manager so you can create and start virtual machines.

3.2. Prepare a qcow2 File

Create a qcow2 file for the VM with the command:

qemu-img create -f qcow2 /data/centos-6.4.qcow2 10G

3.3. Install the System

Install the virtual machine with the help of virt-manager. Remember to set the hard disk format to qcow2 and make sure you have enabled the network interface.

In order to implement features like automatic disk resizing and public key injection, remember to install the cloud-init and cloud-utils-growpart packages. On a Red Hat system, run:

yum install -y cloud-init
yum install -y dracut-modules-growpart cloud-utils-growpart

On a RHEL system, the package update may fail; try the commands below:

yum clean all
yum distro-sync

3.4. Post Work

Install acpid to support power management from the virtualization layer:

# yum install acpid
# chkconfig acpid on

Disable the zeroconf route:

# echo "NOZEROCONF=yes" >> /etc/sysconfig/network

Configure the serial console (in the GRUB configuration):

serial --unit=0 --speed=115200
terminal --timeout=10 console serial
# Edit the kernel line to add the console entries
kernel ... console=tty0 console=ttyS0,115200n8

Clean the MAC address information with virt-sysprep and undefine the virtual machine with virsh:

virt-sysprep -d centos7
virsh undefine centos7

Congratulations! Your image is ready for uploading.