Category: Cloud-IaaS

Gartner’s Magic Quadrant for Cloud Infrastructure as a Service

 
An interesting comparison of IaaS products.


Our team attended IBM Impact 2013, the global conference. The sessions offered plenty of great insight and information on Cloud Computing technologies, including PureFlex, PureApplication and PureData.

Here are some interesting session presentations:

•    Future Directions in Cloud Platforms PDF
•    WebSphere Application Infrastructure The Big Picture PDF
•    Continuous Delivery to Your Private PaaS: The Next Era of Private Cloud PDF
•    SOA as the Foundation for Cloud Adoption PDF
•    Next-Gen Productivity Platform for IBM PureSystems, Web and Mobile PDF
•    Exploring IBM PureApplication System Patterns of Expertise PDF
•    Exploring IBM PureApplication System and IBM Workload Deployer Best Practices PDF
•    Introduction to Cast Iron PDF
•    Best Practices for Cast Iron Integration both in the Cloud and On Premise PDF
•    WebSphere in a Virtual Cloudy World PDF
•    Deploying High-Availability Patterns in PureApplication System PDF
•    Why Would I Want to Put My Database in the Cloud? PDF
•    Driving Continuous Delivery with IBM PureApplication Systems PDF


Cisco Workshop

We attended a workshop at Cisco’s European headquarters in Bedfont Lakes, Feltham (London, United Kingdom). The purpose of the visit was to evaluate Cisco UCS (Unified Computing System), CIAC (Cisco Intelligent Automation for Cloud), the Cisco Nexus 1000V Series Switches and other Cisco technologies for IaaS cloud development. We did a deep dive into Cisco’s Infrastructure as a Service (IaaS) products, such as Cisco Cloud Portal, Cisco Process Orchestrator, Cisco Network Services Manager and Cisco UCS Manager. The visit also included a tour of Cisco’s Customer Proof-of-Concept Lab (CPOC).

www.cisco.com



Hewlett-Packard will offer a Cloud Computing service similar to Amazon Web Services. This should happen in about two months. HP Cloud will focus more on enterprise users and will offer personalized services. The cloud should support Java, PHP, Python and Ruby. The highlight will be an online store for selling, renting and purchasing applications. Another difference is that HP will install several small data centres around the globe. This differs from Amazon’s, Google’s and Microsoft’s approach of a few large data centres.

More info: http://hpcloud.com/

http://bits.blogs.nytimes.com/2012/03/09/first-look-hps-public-cloud/


In April 2012, the IBM Cloud Academy Conference will take place in Research Triangle Park (RTP), North Carolina, U.S.

Matjaž B. Jurič is a Program Committee member.

More information: http://www.ibm.com/solutions/education/cloudacademy/us/en/cloud_academy_conference.html

 


We will present at the Open and Secure Cloud Computing workshop.

The cloud computing workshop Open and Secure Cloud Computing will be held at Technology Park Ljubljana on 14 December 2011, 10 am – 3 pm. The main purpose of this one-day conference is to share knowledge and experience with the open-source IaaS solution OpenStack. The opening lectures will be given by several distinguished guests from abroad: OpenStack Community Manager Stefano Maffulli, the CEO of O’Reilly Media, Tim O’Reilly, and Justin Santa Barbara of FathomDB.

You are welcome to attend the seminar and take advantage of this unique opportunity to get familiar with the leading IT trends and experiences. Register here.

The agenda is below.

Agenda

10:00 – 10:45
OpenStack – Where It Came From and Where It Is Going

The OpenStack project – software, now in its second year of development, that lets a user set up a public or private “cloud” on standard hardware – was founded by NASA and Rackspace Hosting. Many well-known names from the IT world have joined in: Cisco, Dell, Intel, Citrix, NetApp, F5, AMD, Hewlett Packard… altogether over 140 companies and over 1,600 individuals are investing effort in the development of the OpenStack platform for building and managing “clouds”.

Stefano Maffulli, OpenStack Community Manager, www.openstack.org/community
Stefano Maffulli is the global coordinator of this extensive project. After his work on the formation of the Free Software Foundation Europe, he served as community manager for Funambol, the leading open-source system for synchronization between mobile devices, and worked on establishing Twitter in Italy. He now lives and works in San Francisco.

If cloud computing is the future of computing, then understanding how to make that future open is one of the central technological challenges of our day. The OpenStack project is taking great strides towards the vision of an open cloud.

Tim O’Reilly, CEO of O’Reilly Media, Inc., www.oreilly.com

OpenStack will be the seed of many clouds – public and private – based on a single, open standard.

Justin Santa Barbara, FathomDB


10:45 – 11:15
An Overview of Open-Source IaaS Solutions and a Practical Demonstration of the OpenStack Framework

In the Infrastructure as a Service (IaaS) field, both commercial and open-source solutions are currently available. Open-source products in particular are gaining ground in the industry, becoming a pillar of IT support in many companies and organizations. The presentation will give an overview of the most important open-source solutions: OpenNebula, Eucalyptus, Nimbus and OpenStack. The architectural building blocks of each product and their key functionalities will be presented and used as the basis for a comparison of the products. The final part will be a practical demonstration of the OpenStack framework.

Robert Dukarić, B.Sc., XLAB d.o.o., www.xlab.si
Dr. Matjaž B. Jurič, Information Systems Integration Laboratory, Faculty of Computer and Information Science (FRI), www.fri.uni-lj.si


11:15 – 11:45
Security in the Cloud, Too

The Cloud Security Alliance lists seven major threats that, in the experts’ view, face organizations that have moved their business to a public cloud under the IaaS, PaaS or SaaS service models. Where relevant, it also gives examples of incidents, and in every case at least guidelines for preventing them. The second part of the lecture presents the new “Security as a Service” paradigm – ten areas in which today’s cloud providers have plenty of room for new services that cloud tenants sorely miss today, services that will certainly ease the basic distrust caused by the loss of control that comes with moving to the cloud.

Dr. Mojca Ciglarič, assistant professor and head of the Computer Communications Laboratory at the Faculty of Computer and Information Science in Ljubljana. She is a member of the Cloud Security Alliance and serves as research director of the alliance’s Slovenian chapter. www.fri.uni-lj.si


11:45 – 12:00
Security in OpenStack

The presentation will give an overview of the security mechanisms included in the current version of OpenStack, together with recommendations for using OpenStack securely.

Primož Cigoj, B.Sc., Laboratory for Open Systems and Networks (E5), Jožef Stefan Institute, www.e5.ijs.si


12:00 – 12:15
KC Class

A presentation of the activities of KC Class, the competence centre for cloud computing.

Dalibor Baškovč, www.KC-Class.eu


12:15 – 12:30
Break


12:30 – 13:00
Data Storage in OpenStack

Part 1: Experience with an OpenStack Storage installation

Ivan Tomašič, M.Sc., Faculty of Electrical Engineering, Zagreb

Part 2: Connecting OpenStack Storage with Amazon S3

Aleksandra Rashkovska, B.Sc., Jožef Stefan International Postgraduate School; Department of Communication Systems (E6), Jožef Stefan Institute, www-e6.ijs.si


13:00 – 14:00
How to Build a High-Performance Storage System with Affordable Tools

With the ZFS and Nexenta software it is possible to build a reliable, high-performance storage system. In addition to performance, the presentation covers hands-on experience with these systems.

Dr. Matjaž Pančur and Andrej Krevl, B.Sc., Computer Communications Laboratory, Faculty of Computer and Information Science (FRI), www.fri.uni-lj.si


14:00 – 14:30
Cloud Development at Red Hat

Java EE development for the cloud: deploying the JBoss application server to the OpenShift environment.

Aleš Justin, JBoss by Red Hat


14:30 – 15:00
Hardware presentation


OpenNebula is an open-source cloud computing framework for building private, public and hybrid cloud environments. Its goal is to provide an open, flexible and extensible management layer to automate and orchestrate the operations of existing (on-premises) or remote hardware infrastructure, including networking, storage, virtualization, monitoring and user management. OpenNebula also supports a mechanism called “hooks”: the triggering of custom scripts tied to the state changes of particular resources. Hooks can be a powerful feature, as they open up a wide range of possibilities for system and process automation.

Hooks can be triggered by a state change in either a Virtual Machine or a Host. For Virtual Machine state changes, the hook script can be executed on the head node (the OpenNebula cluster controller) or directly on the scheduled host. The hooks mechanism is available “out of the box”, so no additional installation or settings are required – apart from the hooks themselves, of course. To demonstrate the use of hooks for extending the base OpenNebula system with specific business and process flows, we have included a simple example. The example is easy to understand, yet not completely trivial, and could also be used in a real-world scenario.

Let’s say we are the administrators of an OpenNebula cloud that is fully utilized by our client’s IT staff. The IT staff has full control over the virtual machines, but we would still like to be informed when a VM is up and running – in particular, who owns the VM and which host it is currently running on. To achieve this, we will send an e-mail with the required runtime information to a predefined address whenever a virtual machine enters the “running” state.

Great! Now let’s get our hands dirty… The example system’s architecture is as follows:

  • we will use the OpenNebula’s hooks mechanism to trigger a Ruby script when the state of a VM changes to RUNNING
  • the Ruby script will pass the message containing the Virtual Machine’s ID to a server socket
  • the Java socket server will, upon receiving a valid message, trigger the execution of our business logic
  • the business logic will use OpenNebula’s Java RPC API to connect to the cloud, retrieve the VM’s runtime information and send an e-mail to a predefined address

1) We first have to define a new Virtual Machine hook for the “running” state. Open /etc/one/oned.conf or $ONE_LOCATION/etc/oned.conf (depending on your installation type). This is your Hook Manager’s configuration file. Add the following lines at the end of the file:

VM_HOOK = [
  name = "demo_vmhook_running",
  on = "RUNNING",
  command = "demohook.rb",
  arguments = "VM RUNNING $VMID",
  remote = "no" ]

A little explanation won’t hurt:
“name” is the name of the hook and can be anything, but it is useful to provide a descriptive name in case something goes wrong with the script or the hook itself – the name parameter will be displayed in the logs.
“on” is the state this hook is bound to, in our case running. Other states include create, shutdown, stop, etc. For the complete list, please consult the documentation.
“command” is the script file that gets executed when the hook is triggered. We use a Ruby script, “demohook.rb”, since Ruby is automatically installed with OpenNebula and quite easy to read.
“arguments” is probably the most important part of the hook’s definition, because we can access VM template variables with the $ sign. Hence, $VMID means the ID of the Virtual Machine that just entered the RUNNING state.
“remote” is currently set to “no”, because we want the “demohook.rb” script to be executed on the head node, where our Java program is running. By setting this to “yes”, the script is executed remotely (on the host where the VM was scheduled to run), which can also be quite a powerful feature of OpenNebula.

2) Create the Ruby script demohook.rb and place it in /usr/share/one/hooks or $ONE_LOCATION/share/hooks, depending on your installation type:

#!/usr/bin/env ruby
# Called by the OpenNebula Hook Manager with the arguments defined in the
# VM_HOOK "arguments" parameter, e.g.: demohook.rb VM RUNNING 42
require 'socket'

begin
  if !ARGV.at(2)
    puts('3 arguments required')
  else
    # Forward the arguments, joined by "_", to the socket server on
    # localhost:3344 (e.g. "VM_RUNNING_42").
    sck = TCPSocket.new('127.0.0.1', 3344)
    sck.write(ARGV[0] + '_' + ARGV[1] + '_' + ARGV[2])
    sck.close
  end
rescue Errno::ECONNREFUSED
  puts 'TCP socket connection refused on port 3344 - is the Java socket server running?'
end

The script is also available in the source zip file. You can change the port number (3344), but please make sure you also change the port in the Java program accordingly (file HooksListener.java).

3) Restart OpenNebula by issuing the following command:
$ sudo service opennebula restart

OK, we have just installed the OpenNebula hook along with the script, which simply passes the arguments to a server socket on port 3344. Now we need the socket listener to trigger the execution of our business logic (get the VM’s runtime information and send it via e-mail to the system administrator). We will use the Java programming language to utilize the Java RPC API and take full control of the rest of the process. This approach allows us to keep the Ruby script as simple as possible and free of any specific business logic, as Java code is usually easier to maintain and extend. We could, of course, limit ourselves to Ruby only and put everything in the “demohook.rb” script. It doesn’t even have to be Ruby; it could also be Python, a plain old shell script or maybe even PHP. But to demonstrate this example better, the socket connection between Ruby and Java seemed like a good idea. OK, let’s take care of the last part…

4) Download and extract this zip file on your OpenNebula head node, preferably inside your home directory. We recommend you put all the files in the opennebula_hooks_demo subdirectory, apart from the “demohook.rb” script, which you should place as described in step 2 above (if not created already). Now open the build.xml file and change the “basedir” property (line #1) to whatever your home folder is:
<project name="OpenNebulaHooksDemo" basedir="/YOUR_HOME_FOLDER/opennebula_hooks_demo" default="main">

5) We’re almost done; we just need to change a couple of settings:
Open the file /YOUR_HOME_FOLDER/opennebula_hooks_demo/src/si/cloud/opennebula/MailSender.java and specify your e-mail server, username, password and other information required by javax.mail transport:
private static final String MAIL_HOST = "smtp.gmail.com";
private static final int MAIL_PORT = 465;
private static final String MAIL_USERNAME = "[email protected]";
private static final String MAIL_PASSWORD = "mypassword";
private static final String MAIL_FROM = "[email protected]";
private static final String MAIL_TO = "[email protected]";

To retrieve the VM’s runtime information from OpenNebula, we use the Java RPC API. The full API documentation can be found here, but for this example to work, you just need to double-check the basic connection settings. Open the file /YOUR_HOME_FOLDER/opennebula_hooks_demo/src/si/cloud/opennebula/OpenNebula.java and change the settings accordingly.
private static final String ONE_RPC_HOST = "localhost";
private static final String ONE_RPC_PORT = "2633";
private static final String ONE_ADMIN_USERNAME = "oneadmin";
private static final String ONE_ADMIN_PASSWORD = "oneadmin";
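Before blaming the Java code, the same endpoint and credentials can be sanity-checked from the shell: OpenNebula’s CLI tools read them from the ONE_XMLRPC and ONE_AUTH environment variables. A quick check might look like this (the paths and values below mirror the defaults used in this example and are assumptions about your installation):

```shell
# Point the CLI at the same XML-RPC endpoint the Java client will use.
export ONE_XMLRPC="http://localhost:2633/RPC2"
# ONE_AUTH points to a file containing "username:password" (e.g. oneadmin:oneadmin).
export ONE_AUTH="$HOME/.one/one_auth"

# If the settings are correct, listing VMs should work; the ID column shows
# the same $VMID value that the hook passes to demohook.rb.
command -v onevm >/dev/null && onevm list || true
```

If `onevm list` succeeds here but the Java program still fails to connect, the problem is in the Java-side settings rather than in OpenNebula itself.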

6) Make sure Java 6 (or above) and Apache Ant are installed on your head node. You can run the example Java program by issuing the ant command in the project’s base directory, e.g. /YOUR_HOME_FOLDER/opennebula_hooks_demo (which should also be the directory containing Ant’s build.xml file).

If you encounter any problems setting up the system, please feel free to send me an e-mail or post a comment below.


Eucalyptus enables the creation of on-premises private clouds via an API compatible with Amazon’s EC2 and S3. However, the popular open-source version does not provide any built-in monitoring; instead, it relies on integration with proven monitoring tools such as Nagios and Ganglia. For both of these tools, the Eucalyptus source package includes shell scripts (in the Extras directory), which modify the Nagios and Ganglia configuration files to enable Eucalyptus-specific monitoring on a predefined set of hosts. Below is a detailed installation procedure for Debian Squeeze, along with a modified nagios.sh script for full compatibility with Nagios v3.2.3 (the script provided in the Eucalyptus source package produces errors and warnings).

Nagios

1) Install Nagios
On the cluster/cloud controller:
wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.3.tar.gz
tar xvzf nagios-3.2.3.tar.gz
cd nagios-3.2.3/
addgroup nagios
useradd nagios -g nagios
./configure --with-command-group=nagcmd
make all
make install
make install-config
make install-init
make install-webconf
make install-commandmode

2) Configure Nagios plugins
sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install libgd2-xpm-dev
groupadd nagcmd
usermod -a -G nagcmd nagios
usermod -a -G nagcmd www-data
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin (set password for web console access)

cd /root
wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.15.tar.gz
tar xvzf nagios-plugins-1.4.15.tar.gz
cd nagios-plugins-1.4.15/
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install

3) Configure Nagios to automatically start when the system boots
ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios

Verify the sample Nagios configuration files:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Should output
Total Warnings: 0
Total Errors: 0

If there are no errors, start Nagios.
/etc/init.d/nagios start

If you encounter a problem with saving the configuration through the web console (insufficient permissions for the directory /usr/local/nagios/var/rw), try:
chown nagios.www-data /usr/local/nagios/var/rw

4) Integrate Nagios with Eucalyptus
There was a problem with the original nagios.sh script (from the Eucalyptus source) and Nagios 3.2.3: pre-flight checks produced multiple errors and warnings, so the script was modified to conform to the latest Nagios config file definitions:

  • the check_load command is undefined in /usr/local/nagios/etc/objects/commands.cfg; use check_local_load instead
  • add all the required parameters to the service definitions (warnings in pre-flight checks)
  • add all the required parameters to the host definitions (linux-server), so the hosts don’t stay in the PENDING state
  • define a new service (eucalyptus-service) to be injected into the output file, so the parameters can easily be synchronized with the cron job definition (check interval)

The updated script can be found here. Make sure to double-check the path to the Nagios pipe (default: /usr/local/nagios/var/rw).

Create Eucalyptus configuration file for Nagios:
/root/nagios.sh -setup -nodes "hostnameNC" -cc "hostnameCC" -cloud "hostnameCloud" -walrus "hostnameWalrus" > /usr/local/nagios/etc/objects/eucalyptus.cfg
(replace hostnameX with the hostnames of the machines running these services)

Open /usr/local/nagios/etc/nagios.cfg, add
# Eucalyptus
cfg_file=/usr/local/nagios/etc/objects/eucalyptus.cfg

AFTER the line
# Definitions for monitoring a network printer
#cfg_file=/usr/local/nagios/etc/objects/printer.cfg

Issue pre-flight checks:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Should output
Total Warnings: 0
Total Errors: 0

Restart Nagios
/etc/init.d/nagios restart

Add the script, which updates the service statuses, to cron. Run crontab -e and add:
*/5 * * * * /root/nagios.sh -nodes "hostnameNC" -cc "hostnameCC" -cloud "hostnameCloud" -walrus "hostnameWalrus"

Finally, open http://CC_IP/nagios

Note: make sure active checks are disabled for the Eucalyptus services. Active checks rely on Nagios’s internal mechanisms to determine the state of a service, whereas the current script periodically checks the state of the Eucalyptus services itself and publishes the results to the Nagios pipe file as passive check results.
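For reference, a passive check result is just a line in Nagios’s external command format written to the command pipe. A hand-written submission might look like the sketch below; the host and service names follow this post’s placeholders, and the pipe path matches the default mentioned above:

```shell
# Nagios external command format:
#   [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<code>;<plugin output>
# where <code> is 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
CMD_PIPE=/usr/local/nagios/var/rw/nagios.cmd
LINE="[$(date +%s)] PROCESS_SERVICE_CHECK_RESULT;hostnameNC;eucalyptus-service;0;Eucalyptus NC OK"

# Only write if the pipe actually exists (i.e. Nagios is running).
[ -p "$CMD_PIPE" ] && echo "$LINE" > "$CMD_PIPE" || true
```

This is essentially what nagios.sh does on each cron run for every monitored Eucalyptus component.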

Ganglia

1) Install Ganglia
On the cluster/cloud controller
apt-get install rrdtool librrds-perl librrd2-dev libdbi0-dev libapr1-dev libconfuse-dev php5 php5-gd
apt-get install ganglia-monitor gmetad

To use the web interface, Ganglia also needs to be compiled from source. Prerequisite: PCRE
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.12.zip
unzip pcre-8.12.zip
cd pcre-8.12
./configure
make
make install
cd ..

… or install PCRE with:

apt-get install libpcre3 libpcre3-dev

Install Ganglia from source
wget http://downloads.sourceforge.net/project/ganglia/ganglia%20monitoring%20core/3.2.0/ganglia-3.2.0.tar.gz
tar xvzf ganglia-3.2.0.tar.gz
cd ganglia-3.2.0/
./configure --prefix=/opt/ganglia --enable-gexec --with-gmetad
ln -s /lib/libpcre.so.3 /lib/libpcre.so.0
make
make install

2) Enable Web console
mkdir /var/www/ganglia
cp -R web/* /var/www/ganglia
mkdir /var/lib/ganglia/dwoo
chown www-data.www-data /var/lib/ganglia/dwoo
chmod 775 /var/lib/ganglia/dwoo

3) Install the monitoring service on each Node controller
apt-get install ganglia-monitor

4) Start Ganglia at system boot
On the CC/CLC controller, open /etc/rc.local, add:
/etc/init.d/gmetad start
/etc/init.d/ganglia-monitor start

On each NODE controller, open /etc/rc.local, add:
/etc/init.d/ganglia-monitor start

5) Integrate with Eucalyptus (script ganglia.sh)

  1. Copy the ganglia.sh file to each host running ganglia-monitor
  2. Apply exec permissions: chmod +x ganglia.sh
  3. Run the script…

On the Cloud controller:
Monitor Walrus buckets: ./ganglia.sh -type walrus -d /
Monitor EBS volumes: ./ganglia.sh -type sc -d /

On each Node controller:
Monitor VM resources: ./ganglia.sh -type nc -d /
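Under the hood, scripts like ganglia.sh typically publish Eucalyptus-specific values as custom Ganglia metrics via the gmetric tool; gmond then picks them up and gmetad aggregates them. A minimal hand-rolled sketch of the same idea (the metric name and the instances path are assumptions for illustration, not Eucalyptus conventions):

```shell
# Count VM instances on a node controller; the directory is the usual
# Eucalyptus default, so adjust it if your installation differs.
INSTANCES_DIR=/var/lib/eucalyptus/instances
VM_COUNT=$(ls "$INSTANCES_DIR" 2>/dev/null | wc -l | tr -d ' ')

# Publish the value as a custom Ganglia metric (skipped if gmetric is absent).
command -v gmetric >/dev/null && \
  gmetric --name="nc_vm_count" --value="$VM_COUNT" --type=uint32 --units=VMs || true
```

Once published, the metric shows up in the web console alongside the built-in host metrics.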

Restart the Ganglia monitor
/etc/init.d/ganglia-monitor restart

Restart the Meta Daemon on the head node for the changes to become visible in the web console:
/etc/init.d/gmetad restart

If you encounter the following error:
Configuration file '/usr/local/etc/gmond.conf' not found.
Unable to create ganglia send channels. Exiting.

Create the following symbolic link:
ln -s /etc/ganglia/gmond.conf /usr/local/etc/gmond.conf

Finally, open: http://CC_IP/ganglia


Early this year Amazon introduced a new service, Elastic Beanstalk. It enables quick Java application deployment and management without having to worry about the infrastructure that runs the applications. Once an application is created, you simply upload a new application version. After the upload, Elastic Beanstalk automatically creates and configures AWS resources and services – Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (Amazon S3), Amazon Simple Notification Service (Amazon SNS), Amazon CloudWatch, Elastic Load Balancing and Auto Scaling – to run the application. The set of automatically configured resources that runs an application version is called an environment. Although the environment is configured automatically, you still retain full control over the resources comprising it.

On June 30th Amazon announced two new capabilities for Elastic Beanstalk. It is now possible to save an environment configuration and launch new environments from a saved configuration, or apply a saved configuration to an existing environment. This feature makes it easy to launch multiple environments with preferred settings. The second capability allows you to swap URLs between environments, which is especially useful for staging new application versions. You create a new environment for the new application version and, when it is ready for production, simply swap URLs. This way users do not experience any downtime when applications are upgraded.
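At the time of the announcement this was driven from the AWS console or the API; for reference, today’s AWS CLI exposes the same operation as swap-environment-cnames. A sketch with hypothetical environment names:

```shell
# Hypothetical environment names for illustration.
SRC_ENV="myapp-v2-staging"   # environment running the new application version
DST_ENV="myapp-v1-prod"      # environment currently serving production traffic

# Swap the CNAMEs so requests to the production URL reach the new version.
command -v aws >/dev/null && \
  aws elasticbeanstalk swap-environment-cnames \
      --source-environment-name "$SRC_ENV" \
      --destination-environment-name "$DST_ENV" || true
```

Because only the DNS CNAMEs are exchanged, the old environment stays intact and the swap can be repeated to roll back just as quickly.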


According to a recent survey by Ipanema Technologies and Orange Business Services, the majority of enterprises will move to the hybrid cloud in the next four years. The study of 150 enterprise CIOs and IT directors found that 66% of them plan to use the hybrid cloud delivery model in the years to come.

The primary driver for adopting the cloud model seems to be cost reduction, which can be significant. Moving (some) IT services to the cloud and consolidating over-provisioned server infrastructure lowers operating expenditure (OPEX) for power, cooling, maintenance and other costs. As for the SME sector, adopting the cloud model early on can significantly lower capital expenditure (CAPEX), which lets start-ups optimize their initial (seed) funding and achieve a faster time to market (TTM).


