Tag: Cloud Computing (CC)

European Science Cloud

Amid increasing concerns over the US Patriot Act, CERN (the European Organization for Nuclear Research), ESA (the European Space Agency) and EMBL (the European Molecular Biology Laboratory) are launching a massive cloud computing project called Helix Nebula – a science cloud named after a large planetary nebula in the constellation Aquarius.

The goal of this European project (which starts with a two-year pilot phase) is to become a mainstream cloud for scientists by the year 2020. It will hold vast amounts of data, with open source tools and an “infinite amount of computing power” accessible from any device, anywhere. The project includes major European IT companies such as Atos, Capgemini, CloudSigma, Orange Business Services, SAP and two telcos, Telefonica and T-Systems. Some well-known organizations are also participating, such as the CSA (Cloud Security Alliance), the OpenNebula Project (founded as a European FP7 project) and the EGI (European Grid Infrastructure).


Cloud computing workshop (KC CLASS)

The cloud computing workshop for KC Class (Cloud Assisted ServiceS) is taking place today at the Chamber of Commerce and Industry of Slovenia. The presentation slides can be downloaded using the link below:


Unfortunately, the slides are available in Slovenian only.


OpenNebula is an open-source cloud computing framework for building private, public and hybrid cloud environments. Its goal is to provide an open, flexible and extensible management layer to automate and orchestrate the operations of existing (on-premises) or remote hardware infrastructure, including networking, storage, virtualization, monitoring and user management. OpenNebula also supports a mechanism called “hooks”: custom scripts triggered by the state change of a particular resource. Hooks can be a powerful feature, as they open up a wide range of possibilities for system and process automation.

Hooks can be triggered by a state change in either a Virtual Machine or a Host. For Virtual Machine state changes, the hook script can be executed on the head node (the OpenNebula cluster controller) or directly on the host the VM was scheduled to. The hooks mechanism is available “out of the box”, so no additional installation or settings are required – apart from the hooks themselves, of course. To demonstrate how hooks can extend the base OpenNebula system with specific business and process flows, we have included a simple example. It is easy to understand, yet not completely trivial, and could also be used in a real-world scenario.

Let’s say we are the administrators of an OpenNebula cloud which our client’s IT staff can fully utilize. The IT staff has full control over the virtual machines, but we’d still like to be informed when a VM is up and running – particularly who owns it and which host it is currently running on. To achieve this, we will send an e-mail with the required runtime information to a predefined address whenever a virtual machine enters the “running” state.

Great! Now let’s get our hands dirty… The example system’s architecture is as follows:

  • we will use OpenNebula’s hooks mechanism to trigger a Ruby script when the state of a VM changes to RUNNING
  • the Ruby script will pass a message containing the Virtual Machine’s ID to a server socket
  • the Java socket server will, upon receiving a valid message, trigger the execution of our business logic
  • the business logic will use OpenNebula’s Java RPC API to connect to the cloud, retrieve the VM’s runtime information and send an e-mail to a predefined address

1) We first have to define a new Virtual Machine hook for the “running” state. Open /etc/one/oned.conf or $ONE_LOCATION/etc/oned.conf (depending on your installation type). This is the Hook Manager’s configuration file. Add the following lines at the end of the file:

VM_HOOK = [
  name = "demo_vmhook_running",
  on = "RUNNING",
  command = "demohook.rb",
  arguments = "VM RUNNING $VMID",
  remote = "no" ]

A little explanation won’t hurt:
“name” is the name of the hook and can be anything, but a descriptive name is useful in case something goes wrong with the script or the hook itself – the name parameter is displayed in the logs.
“on” is the state this hook is bound to, in our case RUNNING. Other states include CREATE, SHUTDOWN, STOP, etc. For the complete list, please consult the documentation.
“command” is the script file that gets executed when the hook is triggered. We use a Ruby script, “demohook.rb”, since Ruby is automatically installed with OpenNebula and is quite easy to read.
“arguments” is probably the most important part of the hook’s definition, because VM template variables can be accessed here with the $ sign. Hence, $VMID is the ID of the Virtual Machine that just entered the RUNNING state.
“remote” is set to “no” because we want the “demohook.rb” script to be executed on the head node, where our Java program is running. Setting it to “yes” executes the script remotely (on the host where the VM was scheduled to run), which can also be quite a powerful feature of OpenNebula.

2) Create the Ruby script demohook.rb and place it in /usr/share/one/hooks or $ONE_LOCATION/share/hooks, depending on your installation type:

#!/usr/bin/env ruby
require 'socket'

if ARGV.length != 3
  puts("3 arguments required")
  exit 1
end

begin
  # connect to the Java socket server running on the head node
  sck = TCPSocket.new("localhost", 3344)
  sck.write(ARGV[0] + "_" + ARGV[1] + "_" + ARGV[2])
  sck.close
rescue Errno::ECONNREFUSED
  p 'TCP socket connection refused on port 3344 - is Java socket server running?'
end

The script is also available in the source zip file. You can change the port number (3344), but please make sure you also change the port in the Java program accordingly (file HooksListener.java).

3) Restart OpenNebula by issuing the following command:
$ sudo service opennebula restart

OK, we have just installed the OpenNebula hook along with the script, which simply passes the arguments to a server socket on port 3344. Now we need the socket listener to trigger the execution of our business logic (get the VM’s runtime information and send it via e-mail to the system administrator). We will use the Java programming language so we can utilize the Java RPC API and take full control of the rest of the process. This approach keeps the Ruby script as simple as possible and free of any specific business logic, as Java code is usually easier to maintain and extend. We could, of course, limit ourselves to Ruby and put everything in the “demohook.rb” script. It doesn’t even have to be Ruby – it could be Python, a plain old shell script or maybe even PHP. But to demonstrate this example better, a socket connection between Ruby and Java seemed like a good idea. OK, let’s take care of the last part…
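To make the flow concrete, here is a minimal sketch of the listener side. The actual implementation ships as HooksListener.java in the zip; the class and method names below are our own illustration, not code from the download. It accepts connections on port 3344 and extracts the VM ID from the “VM_RUNNING_<id>” message that demohook.rb writes:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical stand-in for HooksListener.java: listen on port 3344 and
// parse the message the Ruby hook sends (its three arguments joined by "_").
public class HooksListenerSketch {

    // Extract the numeric VM id from e.g. "VM_RUNNING_42"; -1 if malformed.
    static int parseVmId(String message) {
        String prefix = "VM_RUNNING_";
        if (message == null || !message.startsWith(prefix)) {
            return -1;
        }
        try {
            return Integer.parseInt(message.substring(prefix.length()));
        } catch (NumberFormatException e) {
            return -1;
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(3344); // same port as demohook.rb
        while (true) {
            Socket client = server.accept();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            int vmId = parseVmId(in.readLine());
            client.close();
            if (vmId >= 0) {
                // the real program would now run the business logic: query the
                // RPC API for the VM's runtime info and send the e-mail
                System.out.println("VM " + vmId + " entered RUNNING state");
            }
        }
    }
}
```

Validating the message before acting on it matters here, since anything able to reach port 3344 can write to the socket.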

4) Download and extract this zip file on your OpenNebula head node, preferably inside your home directory. We recommend you put all the files in an opennebula_hooks_demo subdirectory, apart from the “demohook.rb” script, which goes where step 2 above describes (if not created already). Now open the build.xml file and change the “basedir” property (line #1) to whatever your home folder is:
<project name="OpenNebulaHooksDemo" basedir="/YOUR_HOME_FOLDER/opennebula_hooks_demo" default="main">

5) We’re almost done; we just need to change a couple of settings:
Open the file /YOUR_HOME_FOLDER/opennebula_hooks_demo/src/si/cloud/opennebula/MailSender.java and specify your e-mail server, username, password and other information required by the javax.mail transport:
private static final String MAIL_HOST = "smtp.gmail.com";
private static final int MAIL_PORT = 465;
private static final String MAIL_USERNAME = "[email protected]";
private static final String MAIL_PASSWORD = "mypassword";
private static final String MAIL_FROM = "[email protected]";
private static final String MAIL_TO = "[email protected]";
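The javax.mail transport is configured through a Properties object; as a rough sketch of what the constants above map to (the property keys are the standard javax.mail SMTP settings, but the helper method itself is our own illustration, not code from the zip):

```java
import java.util.Properties;

// Sketch: build the javax.mail SMTP session settings from the constants above.
public class MailConfigSketch {

    static Properties smtpProperties(String host, int port) {
        Properties props = new Properties();
        props.put("mail.smtp.host", host);
        props.put("mail.smtp.port", String.valueOf(port));
        // authenticate with MAIL_USERNAME / MAIL_PASSWORD
        props.put("mail.smtp.auth", "true");
        // port 465 implies implicit SSL, hence the SSL socket factory
        props.put("mail.smtp.socketFactory.port", String.valueOf(port));
        props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(smtpProperties("smtp.gmail.com", 465));
        // MailSender.java then opens a javax.mail Session with properties like
        // these and sends a MimeMessage via Transport.send().
    }
}
```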

To retrieve the VM’s runtime information from OpenNebula, we use the Java RPC API. The full API documentation can be found here, but for this example to work you just need to double-check the basic connection settings. Open the file /YOUR_HOME_FOLDER/opennebula_hooks_demo/src/si/cloud/opennebula/OpenNebula.java and change the settings accordingly:
private static final String ONE_RPC_HOST = "localhost";
private static final String ONE_RPC_PORT = "2633";
private static final String ONE_ADMIN_USERNAME = "oneadmin";
private static final String ONE_ADMIN_PASSWORD = "oneadmin";
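Under the hood, the Java API classes talk to the OpenNebula daemon over XML-RPC on ONE_RPC_PORT. As a rough, illustrative sketch of what a VM info request looks like on the wire (the method name one.vm.info and its session/id parameters come from OpenNebula’s XML-RPC API; the helper class is our own, not part of the demo):

```java
// Illustration of the XML-RPC request issued when fetching a VM's info.
// "one.vm.info" takes the session string ("user:password") and the VM id;
// it is POSTed to http://ONE_RPC_HOST:ONE_RPC_PORT/RPC2.
public class OneRpcSketch {

    static String vmInfoRequest(String session, int vmId) {
        return "<?xml version=\"1.0\"?>"
             + "<methodCall><methodName>one.vm.info</methodName><params>"
             + "<param><value><string>" + session + "</string></value></param>"
             + "<param><value><i4>" + vmId + "</i4></value></param>"
             + "</params></methodCall>";
    }

    public static void main(String[] args) {
        System.out.println(vmInfoRequest("oneadmin:oneadmin", 42));
        // The demo never builds this by hand: the Java RPC API classes
        // construct the call and parse the XML response for us.
    }
}
```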

6) Make sure Java 6 (or above) and Apache Ant are installed on your head node. You can run the example Java program by issuing the ant command in the project’s base directory, e.g. /YOUR_HOME_FOLDER/opennebula_hooks_demo (which should also be the directory containing Ant’s build.xml file).

If you encounter any problems setting up the system, please feel free to send me an e-mail or post a comment below.


Windows Azure Cloud Appliance

During the Microsoft Worldwide Partner Conference 2011, Microsoft introduced the Windows Azure Cloud Appliance – a so-called “private cloud in a box” – to its partners. Microsoft partners can use the Azure appliance to offer new cloud services from their data centers, putting the available server capacity to full use for business applications. The service combines Windows Azure, Microsoft SQL Azure and hardware fitted for Microsoft infrastructures. It is primarily aimed at developers, end-users, service providers and resellers who run applications in a private or hybrid cloud in their own data center. Customers can manage their private cloud via a portal in Microsoft System Center. You can read more here.


According to a recent survey by Ipanema Technologies and Orange Business Services, the majority of enterprises will move to the hybrid cloud in the next four years. The study of 150 enterprise CIOs and IT directors found that 66% of them plan to use the hybrid cloud delivery model in the years to come.

The primary driver for adopting the cloud model seems to be cost reduction, which can reach significant numbers. Moving (some) IT services to the cloud and consolidating over-designed server infrastructure lowers the operating expenditure (OPEX) arising from power, cooling, maintenance and other costs. As for the SME sector, adopting the cloud model early on can significantly lower capital expenditure (CAPEX), which allows start-ups to optimize their initial (seed) funding and achieve a faster time to market (TTM).



IMPACT 2011 – Participation of our team

On the second day of the conference, our Cloud Computing Centre gave a presentation titled “Mobitel’s Next Generation Order & Service Provisioning Management Platform” in cooperation with Swami Chandrasekaran.

The presentation was given by our SOA experts Matej Hertis and Martin Potocnik, who talked about how Mobitel created a new customer Order & Service Provisioning Management Platform. Highlights of the session included how Mobitel reduced its order fulfillment cycle time, gained visibility into orders and automated manually intensive processes.

I would also like to mention that it is wonderful to see other Slovenian companies participate and add extra value to this conference. Their presentations were:

  • Creating Production IBM BPM Deployments – A Case Study at Mobitel; Zoran Mladenovic Mobitel, Tomaz Paternoster IBM Slovenia, Ritesh Saxena IBM
  • Adopting SOA/BPM as a Competitive Advantage During Economy Downturn; Edvard Krasevec, Viator&Vektor, Jurij Rejec, A-Soft d.o.o.
  • Adoption of WebSphere BPM in Slovenian Electricity Distribution Companies; Andrej Bregar, Klemen Sorcnik, Matej Nosan, Informatika d.d.
  • Customer Panel: Meet the Wizards of System z – Stories of Revolution, Consolidation & Victory; Andrej Bregar, Informatika d.d., Marcia Harelik, IBM, Georg Huettenegger, Credit Suisse AG, Laura L. Olson, IBM, Michael Lange, Huntington Bank, Thore Thomassen, Storebrand ASA, Walker Miller, Huntington Bank

Photo captions: Swami Chandrasekaran, Matej Hertis and Martin Potocnik speaking at Impact 2011; Matej Hertis speaking at Impact 2011; Martin Potocnik speaking at Impact 2011.




IMPACT 2011 – The conference has kicked off

On Monday, April 11th 2011, IBM officially kicked off one of its biggest international conferences, Impact 2011: Optimize for Growth. Deliver Results.
The conference was opened by IBM executives Jon Iwata and Nancy Pearson, who took the audience on a journey through IBM’s 100 years of innovation. The session focused both on IBM’s history of innovation and on how this history positions IBM to help businesses become more agile in today’s marketplace.


More and more enterprises are nowadays moving applications to the cloud to modernize their current IT asset base or to prepare for future needs. There are several strategies for migrating applications to new environments. In this blog, we shall discuss a phase-driven step-by-step strategy for migrating applications to the cloud.

One of the key differentiators of AWS’ infrastructure services is flexibility: businesses are free to choose the programming models, languages, operating systems and databases they are already using or familiar with. As a result, many organizations are moving existing applications to the cloud today. The AWS cloud brings scalability, elasticity, agility and reliability to the enterprise. To take advantage of these benefits, enterprises should adopt the previously mentioned migration strategy and move to the cloud as early as possible. Whether it is a typical 3-tier web application, a nightly batch process or a complex backend processing workflow, most applications can be moved to the cloud.

It is true that moving some IT assets or applications currently deployed in company data centers might not make technical or business sense. Those assets can continue to stay within the organization’s walls. However, we strongly believe that there are several assets within an organization that can be moved to the cloud with minimal effort. The step-by-step, phase-driven approach helps you identify ideal projects for migration, build the necessary support within the organization and migrate applications with greater confidence.

A successful migration largely depends on three things: the complexity of the application architecture; how loosely coupled the application is; and how much effort you are willing to put into the migration. We have noticed that when customers follow the step-by-step approach and invest time and resources in building proof-of-concept projects, they clearly see the tremendous potential of AWS and are able to leverage its strengths very quickly.

Phase-driven step-by-step strategy for migrating applications to the cloud


Operating systems and vertical scaling

Scalability can in general be achieved through vertical or horizontal scaling. Vertical scaling, also known as scaling up, means adding more resources to the same machine. Horizontal scaling, also known as scaling out, means creating more instances of the same machine. Scaling in an IaaS cloud is typically achieved through horizontal scaling, either manually, by users provisioning new virtual machines, or automatically, according to certain infrastructure metrics. Capacity provisioned this way can never exactly fit the software’s needs, which is why vertical scaling is a very desirable feature for an IaaS cloud to support. With automated vertical scaling, more memory and CPUs could be added to the virtual machines on the fly, and the provisioned capacity would better fit the software’s needs.

Adding memory and CPUs on the fly is often referred to as hot add. The Windows operating systems supporting this feature are listed below; none of them supports hot removal of memory or CPUs. On the other hand, hot adding and removing CPUs and memory is well supported in Linux – especially in newer versions of the Linux kernel.

Operating system | Hot add memory | Hot remove memory | Hot add CPU | Hot remove CPU
Linux OS with CPU hotplug and memory hotplug support (SLES 11, RHEL 6, and others) | Yes | Yes | Yes | Yes
Windows Server 2008 Datacenter Edition R2 x64 | Yes | No | Yes | No
Windows Server 2008 Enterprise Edition R2 x64 | Yes | No | No | No
Windows Server 2008 Datacenter Edition x64 | Yes | No | Yes | No
Windows Server 2008 Datacenter Edition x86 | Yes | No | No | No
Windows Server 2008 Enterprise Edition x64 | Yes | No | No | No
Windows Server 2008 Enterprise Edition x86 | Yes | No | No | No
Windows Server 2003 Enterprise Edition x64 | Yes | No | No | No
Windows Server 2003 Enterprise Edition x86 | Yes | No | No | No
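Hot-added CPUs can be observed from inside the guest without restarting anything. For instance, a long-running Java process sees newly onlined CPUs through Runtime.availableProcessors(), whose value may change during the lifetime of the JVM. A small illustrative sketch (the polling loop is our own, just for demonstration):

```java
// Sketch: observe hot-added CPUs from inside a guest VM. The CPU count
// reported by the JVM can grow at runtime after a hot add; the JVM's max
// heap, by contrast, is fixed at startup and will not reflect hot-added memory.
public class HotAddWatcher {

    static String snapshot() {
        Runtime rt = Runtime.getRuntime();
        return rt.availableProcessors() + " CPUs visible, "
             + (rt.maxMemory() / (1024 * 1024)) + " MB max heap";
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            System.out.println(snapshot());
            Thread.sleep(5000); // poll every 5 s; watch the count change on hot add
        }
    }
}
```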


Oracle has just filed JSR 342, the umbrella JSR for Java Platform, Enterprise Edition 7. The emphasis of this JSR is on emerging web technologies, cloud computing and an overhaul of the JMS API.

The main focus of Java EE 7 is improved support for cloud applications. New features bring simple support for multi-tenancy, where the same application modules execute in a variety of different environments. Versioning support is also added, allowing different versions of the same application to run within the application server. Support for non-relational (so-called NoSQL) databases is included as well, for more scalable cloud data storage.

Jerome Dochez, Oracle’s GlassFish architect, also spoke at QCon London 2011 about the need for tighter requirements around resource and state management, better application isolation, and common management and monitoring interfaces (PDF slides of the talk).

However, a full modularity solution will not be available in Java EE 7 yet, since full modularity features will not make it into the Java SE 7 specification. Full modularity of applications and versioning is planned for Java EE 8, tentatively scheduled for December 2013.

Among other changes in Java EE 7 is the new JAX-RS 2.0 (JSR 339), which brings asynchronous processing, MVC support, a client API and support for new media features.

JavaServer Faces will also receive an overhaul (JSR 344), with added support for the new Expression Language (JSR 341) and heavy emphasis on new HTML5 features, including forms, audio, video, the new Heading and Sectioning content model and the Metadata content model. JSF 2.2 will most likely be released separately from Java EE 7, since the JSR filing notes target the Java EE 6 platform together with the Servlet improvements in JSR 340.

The final currently known change is the JMS messaging API overhaul (JSR 343), which will allow better integration with application servers, standardize some common vendor extensions and simplify development.

The umbrella JSR has passed initial review on March 14th and is scheduled for release by the end of 2012.

