Saturday 31 December 2011

my.cnf Debian suggestion

#BEGIN CONFIG INFO
#DESCR: 4GB RAM, InnoDB only, ACID, few connections, heavy queries
#TYPE: SYSTEM
#END CONFIG INFO

#
# This is a MySQL example config file for systems with 4GB of memory
# running mostly MySQL using InnoDB only tables and performing complex
# queries with few connections.
#
# You can copy this file to /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options
# (/var/lib/mysql for this installation) or to
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.
#
# More detailed information about the individual options can also be
# found in the manual.
#

#
# The following options will be read by MySQL client applications.
# Note that only client applications shipped by MySQL are guaranteed
# to read this section. If you want your own MySQL client program to
# honor these values, you need to specify it as an option during the
# MySQL client library initialization.
#
[client]
#password       = [your_password]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

# *** Application-specific options follow here ***

#
# The MySQL server
#
[mysqld]

# generic configuration options
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

# back_log is the number of connections the operating system can keep in
# the listen queue, before the MySQL connection manager thread has
# processed them. If you have a very high connection rate and experience
# "connection refused" errors, you might need to increase this value.
# Check your OS documentation for the maximum value of this parameter.
# Attempting to set back_log higher than your operating system limit
# will have no effect.
back_log = 50

# Don't listen on a TCP/IP port at all. This can be a security
# enhancement, if all processes that need to connect to mysqld run
# on the same host.  All interaction with mysqld must be made via Unix
# sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
skip-networking

# The maximum amount of concurrent sessions the MySQL server will
# allow. One of these connections will be reserved for a user with
# SUPER privileges to allow the administrator to login even if the
# connection limit has been reached.
max_connections = 100

# Maximum number of errors allowed per host. If this limit is reached,
# the host will be blocked from connecting to the MySQL server until
# "FLUSH HOSTS" is run or the server is restarted. Invalid passwords
# and other errors during the connect phase increment this counter.
# See the "Aborted_connects" status variable for the global counter.
max_connect_errors = 10

# The number of open tables for all threads. Increasing this value
# increases the number of file descriptors that mysqld requires.
# Therefore you have to make sure to set the amount of open files
# allowed to at least 4096 in the variable "open-files-limit" in
# section [mysqld_safe]
table_cache = 2048

# Enable external file level locking. Enabled file locking will have a
# negative impact on performance, so only use it in case you have
# multiple database instances running on the same files (note some
# restrictions still apply!) or if you use other software relying on
# locking MyISAM tables on file level.
#external-locking

# The maximum size of a query packet the server can handle, as well as
# the maximum query size the server can process (important when working
# with large BLOBs). Enlarged dynamically for each connection.
max_allowed_packet = 16M

# The size of the cache to hold the SQL statements for the binary log
# during a transaction. If you often use big, multi-statement
# transactions you can increase this value to get more performance. All
# statements from transactions are buffered in the binary log cache and
# are written to the binary log at once after the COMMIT. If the
# transaction is larger than this value, a temporary file on disk is
# used instead. This buffer is allocated per connection on the first
# update statement in a transaction.
binlog_cache_size = 1M

# Maximum allowed size for a single HEAP (in memory) table. This option
# is a protection against the accidental creation of a very large HEAP
# table which could otherwise use up all memory resources.
max_heap_table_size = 64M

# Sort buffer is used to perform sorts for some ORDER BY and GROUP BY
# queries. If sorted data does not fit into the sort buffer, a disk
# based merge sort is used instead - See the "Sort_merge_passes"
# status variable. Allocated per thread if sort is needed.
sort_buffer_size = 8M

# This buffer is used for the optimization of full JOINs (JOINs without
# indexes). Such JOINs are very bad for performance in most cases
# anyway, but setting this variable to a large value reduces the
# performance impact. See the "Select_full_join" status variable for a
# count of full JOINs. Allocated per thread if full join is found
join_buffer_size = 8M

# How many threads we should keep in a cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there aren't
# more than thread_cache_size threads from before.  This greatly reduces
# the amount of thread creations needed if you have a lot of new
# connections. (Normally this doesn't give a notable performance
# improvement if you have a good thread implementation.)
thread_cache_size = 8

# This permits the application to give the threads system a hint for the
# desired number of threads that should be run at the same time.  This
# value only makes sense on systems that support the thr_setconcurrency()
# function call (Sun Solaris, for example).
# You should try [number of CPUs]*(2..4) for thread_concurrency
thread_concurrency = 8

# Query cache is used to cache SELECT results and later return them
# without actually executing the same query once again. Having the query
# cache enabled may result in significant speed improvements if you
# have a lot of identical queries and rarely changing tables. See the
# "Qcache_lowmem_prunes" status variable to check if the current value
# is high enough for your load.
# Note: In case your tables change very often or if your queries are
# textually different every time, the query cache may result in a
# slowdown instead of a performance improvement.
query_cache_size = 64M

# Only cache result sets that are smaller than this limit. This is to
# protect the query cache from a very large result set overwriting all
# other query results.
query_cache_limit = 2M

# Minimum word length to be indexed by the full text search index.
# You might wish to decrease it if you need to search for shorter words.
# Note that you need to rebuild your FULLTEXT index, after you have
# modified this value.
ft_min_word_len = 4

# If your system supports the memlock() function call, you might want to
# enable this option while running MySQL to keep it locked in memory and
# to avoid potential swapping out in case of high memory pressure. Good
# for performance.
#memlock

# Table type which is used by default when creating new tables, if not
# specified differently during the CREATE TABLE statement.
#default_table_type = MYISAM
default_table_type = Innodb

# Thread stack size to use. This amount of memory is always reserved at
# connection time. MySQL itself usually needs no more than 64K of
# memory, while if you use your own stack hungry UDF functions or your
# OS requires more stack for some operations, you might need to set this
# to a higher value.
thread_stack = 192K

# Set the default transaction isolation level. Levels available are:
# READ-UNCOMMITTED, READ-COMMITTED, REPEATABLE-READ, SERIALIZABLE
transaction_isolation = REPEATABLE-READ

# Maximum size for internal (in-memory) temporary tables. If a table
# grows larger than this value, it is automatically converted to a disk
# based table. This limitation is for a single table; there can be many
# of them.
tmp_table_size = 64M

# Enable binary logging. This is required for acting as a MASTER in a
# replication configuration. You also need the binary log if you need
# the ability to do point in time recovery from your latest backup.
log-bin=mysql-bin

# If you're using replication with chained slaves (A->B->C), you need to
# enable this option on server B. It enables logging of updates done by
# the slave thread into the slave's binary log.
#log_slave_updates

# Enable the full query log. Every query (even ones with incorrect
# syntax) that the server receives will be logged. This is useful for
# debugging, it is usually disabled in production use.
#log

# Print warnings to the error log file.  If you have any problem with
# MySQL you should enable logging of warnings and examine the error log
# for possible explanations.
#log_warnings

# Log slow queries. Slow queries are queries which take more than the
# amount of time defined in "long_query_time", or which do not use
# indexes well if log_long_format is enabled. It is normally a good idea
# to have this turned on if you frequently add new queries to the
# system.
log_slow_queries

# All queries taking more than this amount of time (in seconds) will be
# treated as slow. Do not use "1" as a value here, as this will result in
# even very fast queries being logged from time to time (as MySQL
# currently measures time with second accuracy only).
long_query_time = 2

# Log more information in the slow query log. Normally it is good to
# have this turned on. This will enable logging of queries that are not
# using indexes in addition to long running queries.
log_long_format

# The directory used by MySQL for storing temporary files. For example,
# it is used to perform disk based large sorts, as well as for internal
# and explicit temporary tables. It might be good to put it on a
# swapfs/tmpfs filesystem, if you do not create very large temporary
# files. Alternatively you can put it on dedicated disk. You can
# specify multiple paths here by separating them by ";" - they will then
# be used in a round-robin fashion.
#tmpdir = /tmp


# ***  Replication related settings


# Unique server identification number between 1 and 2^32-1. This value
# is required for both master and slave hosts. It defaults to 1 if
# "master-host" is not set, but MySQL will not function as a master
# if it is omitted.
server-id = 1

# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='192.168.1.1', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, if you choose this method and then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    changes in this file to the variable values below will be ignored and
#    overridden by the content of the master.info file, unless you shut
#    down the slave server, delete master.info and restart the slave
#    server. For that reason, you may want to leave the lines below
#    untouched (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>

# Make the slave read-only. Only users with the SUPER privilege and the
# replication slave thread will be able to modify data on it. You can
# use this to ensure that no applications will accidentally modify data
# on the slave instead of the master.
#read_only


#*** MyISAM Specific options


# Size of the Key Buffer, used to cache index blocks for MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache rows. Even if you're not using
# MyISAM tables, you should still set it to 8-64M as it will also be
# used for internal temporary disk tables.
key_buffer_size = 32M

# Size of the buffer used for doing full table scans of MyISAM tables.
# Allocated per thread, if a full scan is needed.
read_buffer_size = 2M

# When reading rows in sorted order after a sort, the rows are read
# through this buffer to avoid disk seeks. You can improve ORDER BY
# performance a lot if you set this to a high value.
# Allocated per thread, when needed.
read_rnd_buffer_size = 16M

# MyISAM uses special tree-like cache to make bulk inserts (that is,
# INSERT ... SELECT, INSERT ... VALUES (...), (...), ..., and LOAD DATA
# INFILE) faster. This variable limits the size of the cache tree in
# bytes per thread. Setting it to 0 will disable this optimisation.  Do
# not set it larger than "key_buffer_size" for optimal performance.
# This buffer is allocated when a bulk insert is detected.
bulk_insert_buffer_size = 64M

# This buffer is allocated when MySQL needs to rebuild the index in
# REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
# into an empty table. It is allocated per thread so be careful with
# large settings.
myisam_sort_buffer_size = 128M

# The maximum size of the temporary file MySQL is allowed to use while
# recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
# If the file size would be bigger than this, the index will be created
# through the key cache (which is slower).
myisam_max_sort_file_size = 10G

# If the temporary file used for fast index creation would be bigger
# than using the key cache by the amount specified here, then prefer the
# key cache method.  This is mainly used to force long character keys in
# large tables to use the slower key cache method to create the index.
myisam_max_extra_sort_file_size = 10G

# If a table has more than one index, MyISAM can use more than one
# thread to repair them by sorting in parallel. This makes sense if you
# have multiple CPUs and plenty of memory.
myisam_repair_threads = 1

# Automatically check and repair not properly closed MyISAM tables.
myisam_recover


# *** BDB Specific options ***

# Use this option if you run a MySQL server with BDB support enabled but
# you do not plan to use it. This will save memory and may speed up some
# things.
skip-bdb


# *** INNODB Specific options ***

# Use this option if you have a MySQL server with InnoDB support enabled
# but you do not plan to use it. This will save memory and disk space
# and speed up some things.
#skip-innodb

# Additional memory pool that is used by InnoDB to store metadata
# information.  If InnoDB requires more memory for this purpose it will
# start to allocate it from the OS.  As this is fast enough on most
# recent operating systems, you normally do not need to change this
# value. SHOW INNODB STATUS will display the current amount used.
innodb_additional_mem_pool_size = 16M

# InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
# row data. The bigger you set this the less disk I/O is needed to
# access data in tables. On a dedicated database server you may set this
# parameter up to 80% of the machine physical memory size. Do not set it
# too large, though, because competition of the physical memory may
# cause paging in the operating system.  Note that on 32bit systems you
# might be limited to 2-3.5G of user level memory per process, so do not
# set it too high.
innodb_buffer_pool_size = 8G

# InnoDB stores data in one or more data files forming the tablespace.
# If you have a single logical drive for your data, a single
# autoextending file would be good enough. In other cases, a single file
# per device is often a good choice. You can configure InnoDB to use raw
# disk partitions as well - please refer to the manual for more info
# about this.
innodb_data_file_path = ibdata1:10M:autoextend

# Set this option if you would like the InnoDB tablespace files to be
# stored in another location. By default this is the MySQL datadir.
#innodb_data_home_dir = <directory>

# Number of IO threads to use for async IO operations. This value is
# hardcoded to 4 on Unix, but on Windows disk I/O may benefit from a
# larger number.
innodb_file_io_threads = 4

# If you run into InnoDB tablespace corruption, setting this to a nonzero
# value will likely help you to dump your tables. Start from value 1 and
# increase it until you're able to dump the table successfully.
#innodb_force_recovery=1

# Number of threads allowed inside the InnoDB kernel. The optimal value
# depends highly on the application, hardware as well as the OS
# scheduler properties. A too high value may lead to thread thrashing.
innodb_thread_concurrency = 16

# If set to 1, InnoDB will flush (fsync) the transaction logs to the
# disk at each commit, which offers full ACID behavior. If you are
# willing to compromise this safety, and you are running small
# transactions, you may set this to 0 or 2 to reduce disk I/O to the
# logs. Value 0 means that the log is only written to the log file and
# the log file flushed to disk approximately once per second. Value 2
# means the log is written to the log file at each commit, but the log
# file is only flushed to disk approximately once per second.
innodb_flush_log_at_trx_commit = 1

# Speed up InnoDB shutdown. This will prevent InnoDB from doing a full
# purge and insert buffer merge on shutdown. It may shorten shutdown time
# a lot, but InnoDB will have to do the work on the next startup instead.
#innodb_fast_shutdown

# The size of the buffer InnoDB uses for buffering log data. As soon as
# it is full, InnoDB will have to flush it to disk. As it is flushed
# once per second anyway, it does not make sense to have it very large
# (even with long transactions).
innodb_log_buffer_size = 8M

# Size of each log file in a log group. You should set the combined size
# of log files to about 25%-100% of your buffer pool size to avoid
# unneeded buffer pool flush activity on log file overwrite. However,
# note that a larger logfile size will increase the time needed for the
# recovery process.
innodb_log_file_size = 256M

# Total number of files in the log group. A value of 2-3 is usually good
# enough.
innodb_log_files_in_group = 3

# Location of the InnoDB log files. Default is the MySQL datadir. You
# may wish to point it to a dedicated hard drive or a RAID1 volume for
# improved performance
#innodb_log_group_home_dir

# Maximum allowed percentage of dirty pages in the InnoDB buffer pool.
# If it is reached, InnoDB will start flushing them out aggressively so
# as not to run out of clean pages. This is a soft limit, not guaranteed
# to be held.
innodb_max_dirty_pages_pct = 90

# The flush method InnoDB will use for Log. The tablespace always uses
# doublewrite flush logic. The default value is "fdatasync", another
# option is "O_DSYNC".
#innodb_flush_method=O_DSYNC

# How long an InnoDB transaction should wait for a lock to be granted
# before being rolled back. InnoDB automatically detects transaction
# deadlocks in its own lock table and rolls back the transaction. If you
# use the LOCK TABLES command, or other transaction-safe storage engines
# than InnoDB in the same transaction, then a deadlock may arise which
# InnoDB cannot notice. In cases like this the timeout is useful to
# resolve the situation.
innodb_lock_wait_timeout = 120


[mysqldump]
# Do not buffer the whole result set in memory before writing it to
# file. Required for dumping very large tables
quick

max_allowed_packet = 16M

[mysql]
no-auto-rehash

# Only allow UPDATEs and DELETEs that use keys.
#safe-updates

[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[myisamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]
# Increase the amount of open files allowed per process. Warning: Make
# sure you have set the global system limit high enough! The high value
# is required for a large number of opened tables
open-files-limit = 8192

Thursday 15 December 2011

Postgresql on Ubuntu 10.10 cheatsheet

Here is a little cheat sheet for getting Postgres set up on Ubuntu and creating an initial database and database user.


Install postgres and the python libraries:
 
sudo apt-get install postgresql-8.4 postgresql-client-8.4 python-psycopg2

Modify the config file to allow local connections:
 
sudo nano /etc/postgresql/8.4/main/pg_hba.conf

Add the line:
 
local     all         all     md5

Save the changes to the file and restart the server.
 
sudo /etc/init.d/postgresql restart

Set the password for the postgres user:
 
sudo passwd postgres

Change to the postgres user:
 
su - postgres

Create a new Database:
 
createdb mydb

Login to the postgres shell and point to our new database:
 
psql mydb

Now from the postgres shell create a user and give them access to the database:
 
mydb=> CREATE USER myuser WITH PASSWORD 'myPassword';
mydb=> GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
mydb=> \q

Done!

To dump a single database:

su - postgres
pg_dump dbname > outfile

(Use pg_dumpall > outfile, with no database name, to dump every database in the cluster.)
 
 
Note: this puts outfile in  /var/lib/postgresql


 source

Monday 28 November 2011

Drupal fix failed cron with drush

First run:

drush --yes vset cron_semaphore 0
And then

drush cron
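
Once the semaphore is cleared, you can keep cron firing on a schedule from the system crontab instead of relying on page loads to trigger it. A sketch for /etc/crontab; the drush path and the Drupal root here are assumptions you will need to adjust:

```
# Run Drupal cron hourly as root via drush (paths are examples)
0 * * * *  root  /usr/bin/drush --root=/var/www/drupal --quiet cron
```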

php5-fpm and nginx config to avoid Internal server errors

This is a setting that I've found pretty stable (on a high-end server with 16GB RAM) for a busy Drupal site, after a lot of headaches with 503 errors due to poor configuration:

nano /etc/php5/fpm/pool.d/www.conf

pm = dynamic

pm.max_children = 25
pm.min_spare_servers = 2
pm.max_spare_servers = 10

pm.max_requests = 100
request_terminate_timeout = 30s



It may also be useful to put the following line in php.ini to make sure that we get rid of faulty long-running processes:

max_execution_time = 30

Impose the same time limit in the nginx fastcgi directives:

nano /etc/nginx/sites-available/default:

fastcgi_connect_timeout 30;
fastcgi_send_timeout 30;
fastcgi_read_timeout 30;

And finally, in /etc/php5/fpm/php.ini, my memory limit is:


memory_limit = 256M


Don't forget to restart both nginx and php5-fpm for the changes to take effect.



Sunday 27 November 2011

reset MySQL password

Many suggested solutions did not work, but this one does the job.

First, stop the MySQL server:
 
service mysqld stop

Now start the MySQL server in safe mode with the following options:

mysqld_safe --user=mysql --skip-grant-tables --skip-networking &

Now log in to the MySQL server without a password:
 


mysql -u root mysql

You will get the MySQL prompt.
 


mysql> UPDATE user SET Password=PASSWORD('newrootpassword') WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> exit
 
Restart the MySQL server:

service mysqld restart

Log in to MySQL with the new password:

mysql -uroot -pnewrootpassword

Thanks UnixLab

Monday 15 August 2011

io test of server



dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
A sample taken now:

My Desktop result: 100MB/s
Dedi (under 7 load): 58.6MB/s
UK VPS (no load): 76.9 MB/s
Bursnet VPS: 118 MB/s
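
The dd line above writes a 1 GiB file (16k blocks of 64 KiB); conv=fdatasync forces the data to disk before dd exits, so the reported rate reflects the disk rather than the page cache. A smaller variant (64 MiB; the temp-file path is just an example) that prints only dd's summary line:

```shell
# Write 64 MiB with sync-to-disk, keep only the throughput summary, clean up
dd if=/dev/zero of=/tmp/ddtest bs=64k count=1k conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```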


Wednesday 1 June 2011

Django deployment on Ubuntu 10.04 using nginx and uwsgi

I spent days figuring out how to do this. The problem is that the existing docs are somehow incomplete or too geeky. So I document all the necessary steps that led to a successful deployment, for future reference. Hopefully this helps other beginner uwsgi deployers too!

I used a minimal Ubuntu 10.04 (32-bit), so some of the apt-gets mentioned here may not be needed in a normal Ubuntu installation.

Prepare the server
apt-get update
apt-get upgrade
apt-get install nano
apt-get install --reinstall language-pack-en
apt-get install libxml2-dev build-essential python-dev python-pip


To import PPA keys you need:
apt-get install python-software-properties
Since uwsgi is natively supported from nginx 0.8(?) onward, you need the latest nginx package (which is 1.x) instead of the archaic nginx 0.7x still in the Debian repositories. To install the latest nginx:

sudo su -
echo "deb http://ppa.launchpad.net/nginx/stable/ubuntu lucid main" >> /etc/apt/sources.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C300EE8C
apt-get update 
apt-get install nginx
Now install uwsgi package:


add-apt-repository  ppa:uwsgi/release 
apt-get update
apt-get -y install uwsgi-python



Here is how I organize directories:


/www
  /dje        (the virtualenv)
    /proj     (the Django project)
      /static

So let's install virtualenv and make dje (Django environment)

apt-get install python-virtualenv
mkdir /www
cd /www
virtualenv dje
cd /www/dje
source bin/activate 
  pip install django

Then make a new project called  proj

python /www/dje/bin/django-admin.py startproject proj

cd /www/dje/proj

Don't forget the usual settings.py setup and syncdb! Then

mkdir static
nano deploy.py

In deploy.py, paste:

import os
import sys
from os.path import abspath, dirname, join
from site import addsitedir
sys.path.insert(0, abspath(join(dirname(__file__), "../")))
from django.conf import settings
os.environ["DJANGO_SETTINGS_MODULE"] = "proj.settings"
# sys.path.insert(0, join(settings.PROJECT_ROOT, "apps"))
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
You are done with the Django side; now make the uwsgi config:

nano /etc/uwsgi-python/apps-available/django.xml 
 And paste in it:


<uwsgi>
    <socket>127.0.0.1:4000</socket>
    <pythonpath>/www/dje</pythonpath>
    <app mountpoint="/">
        <script>deploy</script>
    </app>
</uwsgi>
Then, symlink it:

ln -s /etc/uwsgi-python/apps-available/django.xml /etc/uwsgi-python/apps-enabled/django.xml
Finally make the site's nginx config file:

nano /etc/nginx/sites-available/default

And paste in it:


upstream django {
    server 127.0.0.1:4000;
}

server {
    listen 80;
    server_name mysite.com;

    location / {
        uwsgi_pass django;
        include uwsgi_params;
        uwsgi_param UWSGI_PYHOME /www/dje;
        uwsgi_param UWSGI_SCRIPT deploy;  # the module name of deploy.py
        # uwsgi_param SCRIPT_NAME django;
        uwsgi_param UWSGI_CHDIR /www/dje/proj;
    }

    location ^~ /media/ {
        root /www/dje/proj/static;
    }
}

That's it; just restart nginx and uwsgi and enjoy the combo:

service uwsgi-python restart
service nginx  restart
And to install mysql :

apt-get install mysql-server 
apt-get install python-mysqldb 
special thanks to Jason Wang and Web2py folks

Thursday 21 April 2011

How to completely remove PHP from an Ubuntu server?

php_installed=`dpkg -l | grep php| awk '{print $2}' |tr "\n" " "`

# remove all php packages
sudo aptitude purge $php_installed
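
The variable above is built by a plain text pipeline: dpkg -l lists installed packages, grep keeps the php-related lines, awk prints the package-name column, and tr joins the names into one space-separated line. The same idiom on canned input, so you can see what aptitude ends up receiving:

```shell
# Simulate two php packages and one unrelated package in dpkg -l format
printf 'ii  php5-cli  5.3\nii  nginx  1.0\nii  php5-fpm  5.3\n' \
  | grep php | awk '{print $2}' | tr '\n' ' '
# → php5-cli php5-fpm
```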

Saturday 9 April 2011

Pinax deployment with nginx and flup

After a lot of searches and failed attempts, at last I found this solution (kind of) working:

1. First install pinax, using the official guide, and install the necessary packages:

apt-get install python-flup nginx subversion python-mysqldb

2. Modify nginx conf
 
nano /etc/nginx/nginx.conf


user www-data;
worker_processes  1;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
    # multi_accept on;
}

http {
    include       /etc/nginx/mime.types;

    access_log /var/log/nginx/access.log;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_comp_level  6;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}


3. define  foo.com for nginx
nano /etc/nginx/sites-available/foo.com


server {
    listen 80;
    server_name www.foo.com foo.com;

    if ($host != 'foo.com') {
        rewrite ^/(.*)$ http://foo.com/$1 permanent;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        fastcgi_pass 127.0.0.1:7718;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_pass_header Authorization;
        fastcgi_intercept_errors off;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
    }

    location /robots.txt {
        alias /www/foo/media/robots.txt;
    }

    location /site_media/ {
        expires 7d;
        alias /www/foo/media/;
    }

    location /media/ {
        alias /www/env/lib/python2.6/site-packages/django/contrib/admin/media;
    }
}
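
The `if ($host != 'foo.com')` rewrite works, but the nginx documentation recommends handling a www-to-bare-domain redirect in a separate server block rather than an `if`. A sketch using the same names:

```
server {
    listen 80;
    server_name www.foo.com;
    rewrite ^/(.*)$ http://foo.com/$1 permanent;
}
```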


Boring Symlink:
 
ln -s /etc/nginx/sites-available/foo.com /etc/nginx/sites-enabled/foo.com


4. Start nginx and fcgi
sudo /etc/init.d/nginx restart
python /www/foo/manage.py runfcgi host=127.0.0.1 port=7718 pidfile=/www/foo/foocom.pid maxspare=2

Some useful commands:

If you don’t know where your Python site directory is:

python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"


Thanks goes to Thinking Critically

Saturday 19 March 2011

virtualenvwrapper (and install pinax)

First:

 apt-get install python-setuptools
easy_install pip
pip install virtualenv
pip install virtualenvwrapper
Then

nano ~/.bashrc

Now copy the required path


# virtualenvwrapper
export WORKON_HOME=~/.virtualenvs
source /usr/bin/virtualenvwrapper.sh
export PIP_VIRTUALENV_BASE=$WORKON_HOME
export PIP_RESPECT_VIRTUALENV=true



source ~/.bashrc

mkdir ~/.virtualenvs

Usage examples:

mkvirtualenv --no-site-packages myenv

workon myenv

Pinax
Then you can cd to favorite directory and install pinax:

pip install Pinax
And replicate a project:

 pinax-admin setup_project -b social myproj





Thanks:
http://blog.sidmitra.com/manage-multiple-projects-better-with-virtuale

Tuesday 1 March 2011

Mysql database: Migrate and Repair

To repair, just issue this command: 
mysqlcheck -uroot -pxxxxx --auto-repair --optimize --databases your_db
Man page

Migrate mysql using file 

Usually, it is less time-consuming to migrate a database by copying the files directly, instead of using mysqldump to make a *.sql file and then restoring the database. Sometimes, as when the *.sql file is not available, it is the only way to revive a website.

So to migrate the file directly:

1) Stop the MySQL server;
2) if necessary, rename the folder containing the individual table files to whatever you want the database to be named (e.g. 'drupal');
3) copy the folder directly into /var/lib/mysql;
4) set permissions on the folder: as root, run 'chown -R mysql /var/lib/mysql/*';
5) restart the MySQL server;
6) if you've used a new MySQL user, update settings.php accordingly.

Thursday 24 February 2011

MYSQLTuner

MySQLTuner is a Perl script for MySQL optimization.

To get and use it:


  wget http://mysqltuner.com/mysqltuner.pl
  chmod +x mysqltuner.pl
  ./mysqltuner.pl

I achieved an amazing performance boost (server load dropped from 10 to 1) by following the suggestions of this script, i.e.:


General recommendations:
    Add skip-innodb to MySQL configuration to disable InnoDB
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
    query_cache_size (> 64M)
    table_cache (> 64)
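As a sketch, those two "variables to adjust" translate into the [mysqld] section of my.cnf roughly as follows; the exact values here are assumptions, to be tuned and re-checked by running MySQLTuner again after a day of traffic:

```ini
[mysqld]
# MySQLTuner asked for query_cache_size > 64M and table_cache > 64
query_cache_size = 128M
table_cache      = 128
```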






Thanks mediakey.dk for introducing this amazing script.

nsd3+nginx+php-fpm+drupal

Install nsd3 and nginx

touch /etc/nsd3/nsd.conf
apt-get install nsd3
apt-get install nginx
Check these to figure out how to set up php5-fpm (in case the instructions in the first link don't work, install php-fpm from source, as described on howtoforge):

http://gerardmcgarry.com/blog/how-install-php-fpm-nginx-ubuntu-1004-server
http://www.howtoforge.com/installing-php-5.3-nginx-and-php-fpm-on-ubuntu-debian

add an owner user
mkdir -p /srv/mysite
adduser rootuser
usermod -aG www-data rootuser


apt-get install mysql-client mysql-server php5-mysql php5-imagick php5-gd


Create database and set permissions

mysql -u root -p

CREATE DATABASE mysitedb;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON mysitedb.* TO 'mysiteuser'@'localhost' IDENTIFIED BY 'password#';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON mysitedb.* TO 'mysiteuser'@'localhost.localdomain' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
quit;


Install drupal
cd /srv/mysite
wget http://ftp.drupal.org/files/projects/drupal-6.20.tar.gz
tar zxvf drupal-6.20.tar.gz
mv drupal-6.20/* .
rm -r drupal-6.20 drupal-6.20.tar.gz


set up permissions
cd sites/default/
cp default.settings.php settings.php
chown www-data:www-data settings.php
chmod 775 settings.php
mkdir files
chown www-data:www-data files
chmod 775 files
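To put the pieces together, a minimal nginx server block for this layout might look like the sketch below. The server name and the php5-fpm address (127.0.0.1:9000) are assumptions; match them to your own fpm pool and the guides linked above.

```nginx
server {
    listen 80;
    server_name mysite.example.com;
    root /srv/mysite;
    index index.php;

    # Drupal clean URLs
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```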

Tuesday 22 February 2011

memcached on drupal 6

environment: ubuntu hardy (8.04), drupal 6.13

memcached

To install memcached, you need to install libevent first:
sudo apt-get install libevent-dev
Install memcached:
mkdir src
cd src
wget http://memcached.googlecode.com/files/memcached-1.4.0.tar.gz
tar xzvf memcached-1.4.0.tar.gz
cd memcached-1.4.0
./configure
make
sudo make install
cd ..
Create control script:
sudo nano /usr/local/bin/memcache.sh
Add the following code:
#!/bin/sh
case "$1" in
start) /usr/local/bin/memcached -d -u root -m 240  -p 11211
;;
stop)  killall memcached
;;
esac
240 is the memory limit for the instance of memcached, the unit is MB.
11211 is the port number.
make it executable :
sudo chmod +x /usr/local/bin/memcache.sh 
start a memcached instance when the server startup:
sudo nano /etc/rc.local
add:
/usr/local/bin/memcache.sh start
start a memcached instance by running:
/usr/local/bin/memcache.sh start
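The same control logic can also be written as a shell function, which makes it easy to add a usage message for unknown arguments. A sketch, assuming memcached lives at /usr/local/bin/memcached as built above:

```shell
#!/bin/sh
# Sketch of memcache.sh as a function, with a usage message added.
memcache_ctl() {
    case "$1" in
        start) /usr/local/bin/memcached -d -u root -m 240 -p 11211 ;;
        stop)  killall memcached ;;
        *)     echo "Usage: memcache.sh {start|stop}" ;;
    esac
}
```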

PECL memcache extension for PHP

install php-pear if you have not installed it yet
apt-get install php-pear
install PECL memcache :
pecl install Memcache 
Edit the php.ini file:
nano /etc/php5/fpm/php.ini
and add "extension=memcache.so" to it.

Restart nginx and php5-fpm


Memcache API and Integration module

open settings.php of your drupal site ( /sites/default/settings.php ), and add the following to the end of the file :
$conf = array(
   'cache_inc' => './sites/all/modules/memcache/memcache.inc',
 );
Note: you may replace './sites/all/modules/memcache/memcache.inc' with './sites/all/modules/memcache/memcache.db.inc' to cache data to both memory and the database, if your memcached memory limit is small or the memcached instance goes offline often. (See the README.txt of Memcache API and Integration for more details.)
download Memcache API and Integration module from http://drupal.org/project/memcache, install and enable it.
Now, the integration of memcached and your drupal site is done. You can view memcache status from /admin/reports/memcache .

source

Sunday 13 February 2011

set up postfix to send mails to google apps

After hours of searching and trying several different solutions, I found that it is surprisingly simple:

apt-get install postfix
nano /etc/postfix/main.cf


change the following

mydestination = mydomain.com, localhost.mydomain.com, localhost

to

mydestination = localhost.mydomain.com, localhost
Restart Postfix (a full reboot also works). Done!

Thanks Gyaan Sutra

Saturday 29 January 2011

Install python 2.5 on ubuntu 10.04

So, you're a Python developer and like to use the 2.5.x track instead of the 2.6.x or the 3.x track. Well, never fear! Despite the fact that 2.5.5 is not installed in 10.04, or available in the repositories, you can still install it into your system. The following steps will show you how.
Open your terminal and type the following commands line by line:
sudo apt-get install build-essential gcc
cd Downloads
wget http://www.python.org/ftp/python/2.5.5/Python-2.5.5.tgz
tar -xvzf Python-2.5.5.tgz
cd Python-2.5.5
./configure --prefix=/usr/local/python2.5
make
make test
sudo make install
sudo ln -s /usr/local/python2.5/bin/python /usr/bin/python2.5
There you have it! Python 2.5.5 is installed. Now if you want to run it, you can type python2.5 from the terminal.


Source: Welcome to Ubuntu


Friday 28 January 2011

Install virtualenv and django, no-nonsense way

Install virtualenv

Installing virtualenv is easy on a Linux or Mac system, but the instructions that follow are Linux (Ubuntu, actually) specific. First you’ll need setuptools:

sudo apt-get install python-setuptools

Then we can easy_install virtualenv:
sudo easy_install virtualenv
We need to use sudo here because it has to install to a global location. Don’t worry, this is the last time we’ll need to do something as root.

Create your virtualenv

cd to wherever it is you keep your projects (for me, in ~/src), and run:
virtualenv --no-site-packages venv
In this instance I’ve chosen venv as the name for my virtual environment. The --no-site-packages option tells virtualenv not to symlink the global site packages into my local environment, taking just the Python standard library. This is important, because it helps us avoid the dependency difficulties mentioned above.
At this stage you might want to add venv to your list of ignored files, as you don’t want it to be committed to source control:
echo "venv" >> .gitignore

Installing Django

Now, the trick with virtualenv is that it creates its own Python and easy_install binaries, which means you can install/run things specifically in your environment. Let’s install Django:
./venv/bin/easy_install django
And it’s done. Easy. You might also want to install the MySQL bindings and IPython for ease of use:
./venv/bin/easy_install ipython MySQL-python
To start a new Django project, you’ll note that a django-admin.py file will have been installed for you in the environment:
./venv/bin/django-admin.py startproject myapp
Obviously you can skip this step if you have an existing Django project.

Running Django

Now the last step, which is probably obvious by now, is to run Django’s runserver with the virtual Python binary:
cd myapp
../venv/bin/python manage.py runserver 0.0.0.0:8000
And you’re away!


Source: Bradley Wright