“As a research institute with a long legacy in research and development on Distributed Computing Infrastructures, we take the opportunity to provide the academic community with an implementation of OCCI on top of libvirt.” — Alexander Papaspyrou (@papaspyrou), Distributed Computing Virtual Laboratory at the Robotics Research Institute.
“Within the UK-JISC funded project ‘Flexible Services for the Support of Research’, there will be an open source implementation developed of OGF OCCI for Eucalyptus. This will be developed by the project and contributed to the community.”
Platform Computing developed an implementation of the OCCI protocol/API for the German Research Project DGSI. As part of the Service Sharing Facility (SSF), the occi module provides an OCCI implementation written in Python.
Usage is as simple as saying ‘import occi’ in your code. Since Python is an interpreted language and can easily be bound to other programming languages like C/C++ or Java, this implementation can be used for several purposes. The module can be installed either by downloading the source and running ‘python setup.py install’ or by running ‘easy_install pyssf’.
The implementation includes demos for job submission (SaaS/PaaS), a key-value store (PaaS) and, of course, a skeleton implementation of the OCCI infrastructure model which only needs to be bound to your hypervisor to create your IaaS-based Cloud.
- OCCI compliant implementation in Python, deployable as a WSGI application
- Can be used by any device/programming language which is able to understand HTTP
- In addition to the renderings defined by the OCCI specification, it comes with an HTML rendering for easy monitoring using a web browser
- Easy to use – to give your applications a RESTful OCCI compliant interface (“RESTify your apps”)
- Built upon Tornado Web for high-performance request handling
- Multi-user ready, easy to integrate with OpenID or similar authentication/authorization services (including transport via SSL)
- Enables you to easily integrate several products, provide a service interface to clients/customers, and build your Cloud service offerings
- Focuses on Integration, Interoperability, Portability and Innovation
- Documentation: http://pyssf.sf.net
- Source code: https://github.com/tmetsch/pyssf
- PyPi Package Index: http://pypi.python.org/pypi/pyssf
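To illustrate the WSGI deployment model mentioned in the feature list above, here is a minimal sketch of a WSGI application serving an OCCI-style text/plain rendering. Note this is illustrative only and does not use the real pyssf API; the in-memory registry, resource path and attribute values are invented for the example.

```python
# Minimal WSGI sketch of an OCCI-style endpoint (NOT the pyssf API).
from wsgiref.util import setup_testing_defaults

# A toy in-memory "registry" of resources, keyed by request path.
REGISTRY = {
    "/compute/1": {"occi.compute.cores": "2", "occi.compute.memory": "4.0"},
}

def occi_app(environ, start_response):
    """Render a resource's attributes in an OCCI-like text/plain form."""
    path = environ.get("PATH_INFO", "/")
    resource = REGISTRY.get(path)
    if resource is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Resource not found\n"]
    body = "".join(
        'X-OCCI-Attribute: %s="%s"\n' % (key, value)
        for key, value in sorted(resource.items())
    ).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Exercise the app without a network socket, using a synthetic environ.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/compute/1"
statuses = []
body = b"".join(occi_app(environ, lambda s, h: statuses.append(s)))
print(statuses[0])
print(body.decode("utf-8"))
```

Because the handler is a plain WSGI callable, the same function could be mounted behind any WSGI-capable server, which is the property the feature list is pointing at.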
All work (c) 2010-2012 Platform Computing and (c) 2012-2016 engjoy UG (haftungsbeschraenkt) under the LGPL license.
Eucalyptus! OpenStack! LibVirt! Platform! OpenNebula! Apache Tashi!
The pace of development within the OCCI community has been ever increasing over the past few months. This is true not only within the group of people defining the specification but also among the many groups and projects implementing OCCI. In our last blog post we mentioned that Eucalyptus will soon have an implementation of OCCI thanks to the good work David Wallom and his FleSSR team in Oxford are doing. In that post we also hinted at something related to OCCI and OpenStack.
As you might be aware, OpenStack is currently one of the most exciting and vibrant open source Cloud activities. The OCCI working group has been engaged with OpenStack over the past three months with the aim of contributing an implementation of OCCI, and we’re happy to say that this will happen with the “Bexar” release of OpenStack. Incidentally, that’s synchronised with the release schedule of Ubuntu 11.04. You can see the OCCI blueprint on the OpenStack site, which will serve as a point of communication for the implementation work.
Not only will OpenStack receive an implementation of OCCI, but one of the mainstays of infrastructure management frameworks, libvirt, will also have an implementation of OCCI. This work is being carried out by a team led by OCCI community member Alexander Papaspyrou from TU Dortmund University, Germany.
Platform Computing will provide an OCCI implementation for a German Research Project, DGSI, which allows developers to easily extend their existing applications with an OCCI compliant RESTful interface (RESTify your apps).
Given that OCCI is also implemented in OpenNebula and Apache Tashi (via the SLA@SOI implementation) amongst others (we’re running out of space for this post!), OCCI is fast becoming the API that can provide interoperability between the major Open Source infrastructure management frameworks.
As ever, the OCCI group is hugely enthusiastic, welcoming and very supportive of people and groups of all types wishing to get involved with OCCI, whether through specification contributions or new implementations. Curious? Then head on over to IRC (irc.freenode.net #occi), drop a mail on the mailing list or ping some of us on twitter (@dizz, @befreax, @monadic, @papaspyrou).
Stay tuned for more news on OCCI and more implementations of it!
So yes, we’ve been quiet, but as they say, “still waters run deep”. We in OCCI have been deep and active in everything from refining the Core model down through the infrastructure specification and out through the HTTP rendering specification document, and, well, things couldn’t be healthier! Following a superb half-week at OGF30 there’s even more great OCCI-related news to share. Coming into OGF30, I was aware of seven OCCI implementations; coming away, I knew of twelve! Most notable of those twelve is Eucalyptus, which will soon have an implementation through the good work David Wallom and his team in Oxford are doing. You might have noticed a new logo (above) too, contributed by Sam. You might also have seen the various OCCI articles in the latest ERCIM news; if not, go check it out! And there is more, especially in areas related to OpenStack, that we’re dying to share with the community; soon, very soon, you’ll know more!
Much of this work in advocating the adoption and support of OCCI has been carried out by our tireless co-chairs, Thijs, Alexis and me, with superb support from many people within OGF, including Craig Lee (OGF president) and Alan Sill (VP of Standards).
So what was I doing at OGF30 other than working hard and having great fun with the OCCI guys? Well, I presented on the work we’re doing in SLA@SOI: “Standards-based, SLA-enabled Infrastructure Management”. You can check the presentation out here, and I must apologise to those present if I bombarded you with architecture. At least I showed a real live demo! The live demo showed a number of SLA-guaranteed services all managed by OCCI. Incidentally, the OCCI implementation used is open source (BSD) and available on SourceForge. For those not present, there are some screen grabs at the end of the presentation. It’s implemented in the awesome Grails, so if you’re interested, take a wander over there. Some interesting pieces coming from SLA@SOI related to OCCI include a jClouds OCCI implementation and OCCI extensions on advanced scheduling and monitoring.
So if you want to check out what’s going on in OCCI for yourself, why not have a look through the wiki and svn (it’s LaTeX, but you can build it). Come over to IRC (irc.freenode.net #occi). There’s always an OCCI person or two hanging out there and ready to talk.
Finally, as if that wasn’t enough, there was a very interesting DCI-Fed session where we discussed various use cases. DCI-Fed (mailing list, wiki), from a Cloud Computing perspective, is really interesting and exciting. It looks into how different Cloud Computing providers can interoperate to provide federated services to their clients. Certainly the future!
Background: For a period of months, SLA@SOI and RESERVOIR have been collaborating with a goal of architectural and technical integration. Both SLA@SOI and RESERVOIR are multi-million Euro projects funded under the European Framework Programme 7.
Collaboration activities need communication and commonly agreed structures put in place. From a management view, we, the collaborators, established such elements early in our efforts, and this helped us greatly. However, what we then needed was complementary and requisite collaboration from the technical view. This meant having a common and agreed means to communicate technical syntax and semantics between both SLA@SOI’s and RESERVOIR’s infrastructural layers. These means were supplied in the form of the Open Cloud Computing Interface (OCCI) specification. It was established early on that the OCCI specification would be a suitable baseline to cater for the horizontal integration of SLA@SOI’s and RESERVOIR’s infrastructural service layers and hence frame our collaboration at a technical level.
The Open Cloud Computing Interface (OCCI) is a working group formed within the Open Grid Forum. The motivation for initiating this group was the lack of any open standard for Infrastructure as a Service (IaaS) model-based clouds. The open standardisation process is driven by the following motives:
- Interoperability: the ability to enable different systems to integrate with each other. This is an absolute requirement in use cases related to the Intercloud, where two distinctly separate and independent IaaS providers work and orchestrate seamlessly from a customer perspective.
- Portability: the need for easy code reuse in end-user applications like cloud clients or portals. Enabling this allows migration from one IaaS to another with minimal impact upon the customer. Where migration through portability is provided, and hence lock-in is a non-issue, the provider can focus on offering compelling and attractive services, which, due to the almost commodity-like nature of IaaS, often implies competitiveness through lowering of service costs.
- Integration: the idea of “wiring up” IaaS not only with current and modern provider offerings but also with legacy resources and services.
With the focus on providing standardised interfaces to IaaS, the OCCI group defines a RESTful protocol. The goal is to create a simple and elegant interface which can be easily extended by third parties, and the RESTful approach supports this.
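As a small sketch of what a RESTful OCCI interaction looks like on the wire, the following composes the headers of a hypothetical “create compute” request. The Category scheme URI and the occi.compute.* attribute names follow the OCCI infrastructure drafts; the helper function itself and its parameters are invented for this illustration.

```python
# Sketch: composing the headers of an OCCI "create compute" request.
# Scheme URI and attribute names follow the OCCI infrastructure drafts;
# the helper function is invented for illustration only.
def make_create_compute_headers(cores, memory_gb):
    category = ('compute; '
                'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                'class="kind"')
    attributes = [
        'occi.compute.cores=%d' % cores,
        'occi.compute.memory=%.1f' % memory_gb,
    ]
    return {
        "Category": category,
        "X-OCCI-Attribute": ", ".join(attributes),
        "Content-Type": "text/occi",
    }

headers = make_create_compute_headers(cores=2, memory_gb=4.0)
for name, value in sorted(headers.items()):
    print("%s: %s" % (name, value))
```

These headers would accompany an HTTP POST to the provider’s compute collection; everything a client needs to say about the new resource travels as plain HTTP metadata, which is what makes the interface easy to consume from any language with an HTTP library.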
OCCI is a boundary protocol/API that acts as a service front-end to an IaaS provider’s internal infrastructure management framework. It is OCCI that provides the commonly understood semantics and syntax in the domain of customer-to-provider infrastructure management. Moreover, OCCI is focused on the management of infrastructure hosted in the cloud, in effect, utility computing. The following diagram shows OCCI’s place in the communications chain:
To give this view further context within the collaboration, below we show how RESERVOIR and SLA@SOI would both, quite naturally, integrate together using OCCI as their means for IaaS interoperability.
Further details of this particular integration study were made available in the joint technical report.
Open Architectural Issues & Proposed Solutions
As OCCI was identified early as the interface specification that each project would implement, it was not particularly difficult to integrate architecturally. From our review of OCCI, there were nonetheless a number of issues that might hamper this effort. Questions were raised regarding the suitability of using HTTP headers as a means to transfer serialised data over the network. Whereas HTTP headers are a reasonable place to transmit data with a small payload, inserting data with a large payload there is not practical. Typically, the most common data transferred to OCCI clients are collections of VM representations. This is currently being addressed by the OCCI group.
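To make the header-based serialisation discussed above concrete, here is a small sketch of parsing an OCCI Category header of the form `term; scheme="..."; class="..."`. This hand-rolled parser covers only the simple, well-formed case and is not a complete implementation of the OCCI rendering rules.

```python
# Sketch: parsing a simple OCCI Category header value.
# Covers only well-formed input with no escaped quotes or embedded
# semicolons; NOT a full implementation of the OCCI HTTP rendering.
def parse_category(header_value):
    parts = [p.strip() for p in header_value.split(";")]
    category = {"term": parts[0]}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        category[key.strip()] = value.strip().strip('"')
    return category

cat = parse_category(
    'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"'
)
print(cat["term"], cat["scheme"], cat["class"])
```

Even this toy parser shows why header-based rendering suits small payloads: each header line is one flat key/value record, which becomes unwieldy once a response must carry a whole collection of VM representations.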
There has also been demand for alternative serialisation formats beyond HTTP headers. The OCCI working group is now investigating, and aims to specify, how to represent the OCCI model as RDFa within XHTML documents. This would allow OCCI serialisations to be rendered within a web browser. An advantage of exposing the attributes and relationships of OCCI-managed resources through RDFa is that not only can a web browser consume and display the content, but programmatic clients that can extract RDFa can reliably extract data to perform automated tasks without being subject to the issues associated with screen-scraping. A consequence of supporting RDFa is that the OCCI model must be reified as an RDF ontology in order to support and validate RDFa declarations within an XHTML document. Defining an RDF ontology also adds huge possibilities to the OCCI standard: it not only provides a serialisation format that supports a richer and more extensible model, but could potentially further the cause of linked data and the semantic web.
From our collaboration work, it was found that one area OCCI does not currently address is atomic provisioning of multiple resources. With the current OCCI specification, it is only possible to provision one resource per request. For some use cases this is not sufficient, as they require that multiple resources be successfully provisioned through one request or not at all. The interim solution used by RESERVOIR was to utilise the Open Virtualisation Format (OVF) specification to express many resources within one request. This work is an example of how other open specifications can be integrated with the OCCI specification. In reference to the previously mentioned OCCI developments, an RDF serialisation format, indeed RDFa, could also address this limitation, as RDF easily supports multiple resources per request due to its XML heritage.
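The “many resources, one request” idea that RESERVOIR used OVF for can be sketched as follows. The envelope below is a minimal, OVF-flavoured XML structure; it is not a schema-complete OVF document, and the element names and VM names are simplified for the example.

```python
# Sketch: expressing two virtual machines in one request body using a
# minimal, OVF-flavoured XML envelope. NOT schema-complete OVF; it only
# illustrates batching multiple resources into a single request.
import xml.etree.ElementTree as ET

def build_envelope(vm_names):
    envelope = ET.Element("Envelope")
    for name in vm_names:
        vm = ET.SubElement(envelope, "VirtualSystem", {"id": name})
        ET.SubElement(vm, "Name").text = name
    return ET.tostring(envelope, encoding="unicode")

xml_body = build_envelope(["web-frontend", "db-backend"])
print(xml_body)
```

A provider receiving such an envelope can treat the whole document as one unit of work, provisioning every VirtualSystem or none at all, which is precisely the atomicity the single-resource-per-request OCCI model lacks.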
In order for any provider to be SLA-enabled by SLA@SOI, that provider should ideally offer a means to monitor each service provisioned by its systems. In this case, the provider would offer a monitoring service in parallel to its service offering. The exclusion of monitoring considerations from the OCCI specification was found to be an issue when SLA-enabling infrastructural services that implemented the OCCI specification. Although OCCI does not currently offer a means to perform monitoring, other than periodic pull requests to retrieve individual resource metrics, OCCI does not preclude other monitoring specifications being used. It is at this point that the two projects, as currently implemented, diverge, and so allowing seamless interoperable SLA management across the two projects requires that both projects select, just as was done for IaaS management, a standard or common specification for monitoring. Within SLA@SOI, interacting with a service manager is currently performed via a messaging bus powered by the open standard XMPP. Within RESERVOIR, monitoring information is accessed using the TCloud monitoring API. If the two projects are ever to be interoperable from an SLA management perspective through horizontal integration, then this difference in monitoring approaches needs to be addressed. The suggestion from the horizontal integration working group is two-fold. First, select an API-based monitoring specification that allows asynchronous notifications to be pushed from the provider. Second, drawing on the learning from implementing the selected API, contribute back to the OCCI working group a compatible specification for an OCCI monitoring extension.
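The “periodic pull” style of monitoring that OCCI currently allows can be sketched as below. The fetch function is injected so the loop can be exercised without a real provider endpoint; the metric name, threshold and sample values are all invented for the illustration, and in practice the fetch would be an HTTP GET against a resource’s attributes.

```python
# Sketch: periodic-pull monitoring of a single resource metric.
# fetch_metric stands in for an HTTP GET of e.g. an occi.compute CPU
# load attribute on a hypothetical provider; values here are invented.
def poll_metric(fetch_metric, samples, threshold):
    """Pull `samples` readings and report any that breach `threshold`."""
    breaches = []
    for i in range(samples):
        value = fetch_metric()
        if value > threshold:
            breaches.append((i, value))
    return breaches

# Stub readings standing in for successive pulls from the provider.
readings = iter([0.2, 0.9, 0.4])
breaches = poll_metric(lambda: next(readings), samples=3, threshold=0.8)
print(breaches)  # one breach: sample 1 at 0.9
```

The weakness the post identifies is visible here: the SLA manager only learns of a breach at the next poll, whereas a push-based (asynchronous notification) API would deliver it as it happens.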
As already noted, a number of OCCI implementations are being actively developed. Once implementations are ready for consumption, it would be appropriate to, firstly, nominate reference implementations and, secondly, perform interoperability tests with those agreed reference implementations and report on the results.
Resulting from the collaboration activity, a number of outputs, both completed and ongoing, were achieved. A joint technical report entitled “Using Cloud Standards for Interoperability of Cloud Frameworks” was published. It introduced the two collaborating projects and OCCI, and outlined a basic use case along with an architecture for how the two projects can interoperate.
As RESERVOIR and SLA@SOI had vested interests in OCCI, each project produced its own implementation of the OCCI specification. SLA@SOI has an infrastructure service manager which allows the provisioning of infrastructural services atop its chosen provisioning system, Tashi. RESERVOIR has also exposed OCCI interfaces both at the Service Manager level, through the Claudia project’s implementation, and at the VEEM level, through the OpenNebula implementation. This potentially allows OCCI interoperation not only in a horizontal fashion but also, in the context of RESERVOIR’s architecture, in a vertical fashion.
As each project worked independently on its implementation of OCCI, but still with OCCI as the vehicle of collaboration, each project supplied feedback to the specification. To date this is largely captured in the section on open architectural issues. It should also be noted that there is another implementation of OCCI in use currently; this implementation belongs to the Istituto Nazionale di Fisica Nucleare (INFN) and was presented at OGF28.
By selecting standardised and commonly agreed interfaces, the integration of both architecture and technology was vastly expedited and simplified, which reflects the benefits of standardisation. The value of standards as a tool for rapid and productive collaboration between large projects, be they EU-funded or commercial, cannot be overstated. Standards allow everyone to share a common baseline of functionality and level what would otherwise be an uneven, jagged technology landscape where vast amounts of time, funding and resources are spent just for basic communications to be achieved. With everyone sharing a common baseline of functionality, further functionality can be built upon it: in the case of IaaS, for example, Interclouds of IaaS, and IaaS cloud brokers with failover intelligence can be rapidly developed. The horizontal integration working group provided good examples of how to quickly deal with deficiencies in specifications by not reinventing the wheel but rather reusing existing standards, e.g. OVF and TCloud. In general, this work showed that it is possible to let two large Cloud-oriented frameworks interoperate even with vastly different architectures and goals. This paves the way for possible proof-of-concept demonstrators whose functionality is greater than the sum of its parts.
 Fielding, R.T.: Architectural Styles and the Design of Network-based Software Architectures, Doctoral dissertation, University of California, Irvine (2000).