Saying that computer forensics investigations are necessary in the cloud, or maybe especially in the cloud – to assess risk correctly and arm yourself against attacks effectively – may seem like stating the obvious. However, the scientific community has ignored the issue of forensics in cloud environments thus far. Interestingly, some authors pointed out as early as 2009 a lack of publications on the cloud security problem and on corresponding legal issues. This paucity of information was confirmed by other publications. Despite this, the topic is still largely overlooked, and a huge amount of work remains for scientists, especially in the field of incident handling in cloud environments.
At the same time, many companies are investing heavily in new cloud environments and then migrating services to the cloud. Although debate is increasing on security and data protection problems, the apparent advantages for users seem to take priority.
Problems in Cloud Forensics
One classic problem in forensics is the fact that the evidence is generally characterized by its fragility and volatility. When you are collecting new evidence in particular, you must be careful not to falsify or even destroy the evidence. This problem is not restricted to the digital world but applies equally to, say, forensic medicine. The advantage of collecting digital evidence has always been that the investigator can create a one-to-one copy of the data medium in many scenarios before starting to analyze the evidence. This approach is effective in preventing the destruction of potential evidence by the analysis process, but, in a cloud environment, is typically not so easy to do.
Depending on the service model (SaaS, PaaS, or IaaS) and the extent to which the Cloud Service Provider (CSP) cooperates, users may be able to access potential sources of evidence that are absolutely necessary for an investigation. However, the volume of this evidence is typically very limited, which prevents a complete resolution of the facts of the case.
The context in which the evidence exists is another issue. External forensics investigators might not, at first glance, be able to see how the existing pieces of evidence from the various components of the cloud system correlate. This is also true of legacy IT systems, but the cloud, with its international and cross-national structures, is all the more difficult to analyze and evaluate.
Securing the chain of custody for the evidence is also difficult. The CSP hands over the potential evidence to the user – but how can the user be sure that the evidence is genuine and has not been injected by a malicious third party? In this context, the term data provenance becomes extremely meaningful: It covers the origins of a piece of data and how it might have been modified – that is, who has viewed or modified the piece of data at a particular point in time.
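To illustrate the idea, a provenance trail can be sketched as a hash chain in which each access record embeds the digest of its predecessor, so a modified or injected record breaks every later link. This is a minimal sketch in Python; the record fields, actors, and timestamps are invented for illustration and do not reflect any real CSP interface.

```python
import hashlib
import json

def chain_entry(prev_hash, actor, action, timestamp):
    """Create a tamper-evident provenance record linked to its predecessor."""
    record = {"prev": prev_hash, "actor": actor, "action": action, "ts": timestamp}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest

def verify_chain(records):
    """Recompute each link; any modified record breaks every later hash."""
    prev = "0" * 64
    for record, digest in records:
        if record["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

# Example: two accesses to the same piece of data (invented values)
log = []
entry, h = chain_entry("0" * 64, "alice", "read", "2011-05-01T10:00Z")
log.append((entry, h))
entry, h = chain_entry(h, "bob", "modify", "2011-05-01T10:05Z")
log.append((entry, h))
assert verify_chain(log)
```

A real provenance system would also need to bind the chain to a trusted anchor (e.g., a signature from the CSP); the chain alone only proves internal consistency.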
Additionally, using automated forensic tools in today's cloud environments is difficult or even impossible. You need to view and process each case individually because of the lack of standards. And even if standards did exist, you could not rely on every CSP to implement all of them: The risk of jeopardizing their own position in the market would be too great.
Forensics in SaaS Applications
Software-as-a-service (SaaS) applications are becoming increasingly popular. Offerings from Google and Salesforce, for example, show how efficiently and easily applications can be migrated out into the cloud. In terms of application security, CSPs increasingly understand that users set great store by secure implementation and authentication. Paradoxically, very few CSPs take a proactive approach to incident handling. You can expect the current assurances of cloud security to be followed by a phase in which users learn through painful experience that their cloud-based data wasn't totally secure after all.
In other words, today's crop of SaaS applications offers virtually no opportunity to perform forensic investigations. To demonstrate this, we will look at an example that may be fictitious but is nonetheless not too far from today's practical SaaS applications.
Corporations can outsource the email service to a SaaS provider, which offers various benefits. For example, you don't need to purchase and manage your own email server, which saves both procurement and personnel costs. From now on, the CSP is responsible for the security of the application – not your own administrator, who simply has to manage the user interface. Staff can then configure their email clients to use SSL/TLS to retrieve email directly from the CSP and to use the CSP's email server in the cloud to send email. Users also can access the CSP's web front end as a web mailer.
Unfortunately, users' desktops are continuously hit by malware despite the use of anti-malware solutions. Malicious programs can sniff the password for access to the email servers in the SaaS cloud and send it to an attacker. Incidents of this kind are nothing special; in fact, they are part of the daily grind. The employee typically doesn't notice a thing and isn't even suspicious, because the email client just keeps on working as before. Even if the attacker uses the stolen credentials to log in to the victim's email account via the web front end, the victim typically will not notice because logging mechanisms giving the user the ability to identify such incidents are not usually configured.
Some CSPs show you the last IP addresses used to access the email account; however, this assumes that the user actually logs in to the web front end and that the recorded address differs from the user's own. In the case of an insider attack from the same network, or if the user simply can't interpret the information, detecting the attack becomes very difficult.
The attacker would need to leave noticeable traces in the victim's email account before the victim became suspicious. This might be the case if the attacker were to delete a message from the inbox or manipulate a message in some noticeable way. The victim would be surprised in this case but might just as easily blame some kind of technical glitch at the CSP. The user has no way to confirm or refute this theory. In other words, a method of guaranteeing accountability in cloud services is also missing.
If users do suspect somebody of manipulating their email, they still don't have an option for performing forensics investigations themselves. Their only option is to contact the CSP's hotline and request that the incident be investigated. Depending on the CSP's approach, the user might finally find out that their account has been compromised.
But what would the results of this be? The user has no way of finding out which data the attacker has viewed because they have no access to logging data. In the worst case, the attacker might have copied the whole account and simply published it on the web. Another possibility is that the attacker might sell the data to a competitor – all of this is pure speculation to the user.
Outgoing email is another issue: It is very difficult to track the correspondence that went out to potential business partners, because an attentive attacker would probably delete outgoing and incoming messages. In other words, an intruder could attack systems belonging to business partners and colleagues in the name of the victim. They could also send false offers to business prospects or even manipulate account data.
Of course, all of these attacks would be possible with a legacy email service that you ran yourself. But, in that case, a retrospective investigation would be much easier because systems and processes could be established to support or facilitate forensics investigations. In a cloud environment, the user is primarily dependent on the CSP's cooperation. This situation could cause substantial legal problems or delays in the case of globally active CSPs.
New Approaches to Cloud Forensics
Basically, three different components must be considered for investigations in cloud environments: the user's client system, which is connected to the cloud service; the network layer through which the data was exchanged between the user and the cloud; and the virtualized cloud instance – independent of the service model. The sources of evidence from all three components must be correlated and organized in a common context to make it possible to discover the details of the incident. Unfortunately, this is very difficult to do in real-life scenarios, because the investigator typically doesn't have the necessary evidence.
As this example shows, traditional methods of digital forensics are no longer useful, or only partly useful, in cloud environments. Whereas a one-to-one copy of the data medium previously was created in the case of an incident, investigators cannot resort to that method in today's virtualized environments. The main issue here is that the forensic investigator typically doesn't exactly know where the data affected by the incident is located. This applies to the SaaS scenario as well: The precise storage location of the email is invisible to the user and attempting to identify it would cause both security problems and data protection problems.
Additionally, the CSP will never grant the user physical access to the data medium because it could also contain data belonging to other users that is not intended for disclosure to a third party. To regulate cases like this, it is a good idea to add a clause to your Service Level Agreement (SLA); however, we are unaware of a CSP that offers this kind of provision in its SLA.
In infrastructure-as-a-service (IaaS) scenarios, the increasing virtualization of physical machines also offers some benefits: Snapshot technologies make it possible to create complete images of virtual machines, which is an enormous advantage for digital forensics. But one question remains unanswered: whether or not the image was created in a valid way. There is no simple way for a user to verify this. Documentation of the technical process behind a snapshot would definitely be a big help; however, many CSPs work with a modified version of the hypervisor, and you should not rely on the manufacturer's process documentation for this reason.
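If a CSP were to publish a cryptographic digest at the moment a snapshot is taken, the customer could at least verify that the delivered image is the one that was created. The following sketch assumes such a published digest exists – no current CSP is known to offer this – and simply hashes the image in chunks.

```python
import hashlib

def snapshot_digest(path, chunk_size=1 << 20):
    """Hash a potentially large snapshot image in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path, published_digest):
    """Compare the local hash with the digest the CSP (hypothetically) published."""
    return snapshot_digest(path) == published_digest
```

Note that this only ties the delivered image to the published digest; whether the snapshot itself was taken in a forensically valid way remains the open question raised above.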
Additionally, IaaS service customers are unable to access network components, making it impossible to access log files on routers, switches, firewalls, and the like in the course of the investigation. Thus, no evidence at all is available from the network layer, and only the CSP would be able to change that.
An interface that allows the customer to tap into the source of evidence would be a useful step in the right direction. Network components then could create individual log files tailored for virtual instances in IaaS scenarios. This would mean that only excerpts of the network traffic to and from the virtual instance would be recorded and provided to the customer, possibly even as a commercial value-added service. Also, the CSP would probably want to restrict the period of time in which log files are available to the customer to avoid wasting storage space. Of course, customers could only access evidence for their own virtual instance and not for any neighboring instances. Such an approach would offer a decisive benefit to the customer: In the case of a compromise, the customer could correlate evidence from the client with the network layer and the server (IaaS instance), assuming that the IaaS instance had written its files to an external logfile server at regular intervals. Otherwise, an attacker would be able to delete or modify the log files after successfully breaking into the IaaS instance.
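Shipping log lines off the instance at short intervals can be combined with a seal over each line, so the external log server can later detect lines that were modified after delivery. The sketch below uses an HMAC over a key shared with the log server; the key, the transport, and the log format are all assumptions for illustration.

```python
import hashlib
import hmac
import time

# Hypothetical key shared between the IaaS instance and the external log server
SHARED_KEY = b"example-shared-key"

def sealed_log_line(message, key=SHARED_KEY):
    """Prefix a timestamp and append an HMAC tag to one log line."""
    line = f"{int(time.time())} {message}"
    tag = hmac.new(key, line.encode(), hashlib.sha256).hexdigest()
    return f"{line} {tag}"

def verify_log_line(sealed, key=SHARED_KEY):
    """Recompute the tag; a modified line no longer matches."""
    line, _, tag = sealed.rpartition(" ")
    expected = hmac.new(key, line.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

An attacker who fully compromises the instance also obtains the key, so the seal mainly protects lines that have already been shipped; the short shipping interval is what limits how much history the attacker can still suppress.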
To prevent attacks of this kind, the CSP could offer a virtual introspection service. The customer would need to sign an agreement allowing the CSP to run automated tools against the hypervisor at regular intervals to verify the system state of the virtual instance and create corresponding log entries. Attackers would not be able to manipulate these entries unless they completely compromised the hypervisor.
In SaaS environments, customers can only communicate with the service through a prebuilt API. In many cases, this process is handled directly by a web interface. Advantages and disadvantages to this approach exist, but the decisive thing is that customers need not concern themselves with the security of the application – that is in the hands of the CSP. The disadvantage is that the customer is restricted in terms of the flexibility of the application – that is, the customer cannot just add functionality that the CSP doesn't implement. One example of this is the feasibility of forensic investigations in SaaS scenarios, as the previous example showed.
However, options exist for changing this situation: The CSP could offer an additional forensics interface, which users could then leverage to trace specific access to records in the SaaS application. These records could comprise, for example, emails, customer data, and financial data. The cloud service customer would thus be able to verify how and when users accessed the data.
Such an interface could be part of a larger provenance framework that would log all read and write access to the records and serve up the results to the customer. In case of a suspicious incident, a user could query the API for corresponding access and thus more easily identify unauthorized access. At the same time, an automated evaluator could make sure that no unauthorized third-party access occurs.
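What a customer-side query against such a provenance interface might look like can be sketched as a simple filter over returned access records. The record layout, the field names, and the idea of a "trusted network" prefix are invented here for illustration; a real interface would define its own schema.

```python
from datetime import datetime

# Hypothetical access records, as a forensics interface might return them
access_log = [
    {"user": "alice", "record": "mail-1042", "op": "read",
     "ip": "10.0.0.5", "ts": datetime(2011, 5, 1, 9, 12)},
    {"user": "alice", "record": "mail-1042", "op": "read",
     "ip": "198.51.100.7", "ts": datetime(2011, 5, 1, 3, 44)},
]

def suspicious_accesses(log, trusted_prefixes=("10.",)):
    """Flag accesses that originated outside the customer's trusted networks."""
    return [e for e in log if not e["ip"].startswith(tuple(trusted_prefixes))]

for entry in suspicious_accesses(access_log):
    print(f'{entry["ts"]}: {entry["user"]} {entry["op"]} '
          f'{entry["record"]} from {entry["ip"]}')
```

An automated evaluator, as suggested above, would amount to running such a filter continuously against the interface rather than only after a suspected incident.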
For the CSP, implementing this type of interface would be easier than with other service models because the context of the data to be protected is clearly visible.
For example, if the API function storing a file is called in the scope of a PaaS service, the function call could be logged to record the fact that a file with a specific name was stored at a specific location; the further context of the file is unclear to the CSP. This is different for a SaaS service: The context of the data is clearly visible to the CSP, because the CSP has implemented the functionality (e.g., email is sent, processed, and so on).
The data format of this interface could leverage existing data formats, thus ensuring interoperability between various tools and frameworks. One popular format is DFXML by Simson Garfinkel, which is an XML format for describing forensic information.
Using an open data format would also offer the advantage that existing forensics projects could be connected to this interface. In an ideal world, customers would thus be able to add a cloud service to their own in-house forensics framework.
To guarantee requirements such as integrity, authenticity, and confidentiality, existing XML frameworks for signing and encryption could be used with DFXML. This means that the log information would be output to the customer in the form of XML via an API and additionally signed by the CSP, or even encrypted if needed. However, there is no way of knowing precisely what information the customer might need for an investigation – this depends to a great extent on the scenario in question or on the SaaS platform and the critical assets.

Finally, it is important to determine how long the CSP should retain data for each customer. Again, this value could be defined individually in the SLA. At the same time, the customer would have the option of extracting the data directly from the interface and integrating it with their own logging server. On the server, the logs could be matched with the customer's own policy and evaluated correspondingly.
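How a signed, DFXML-style record handed to the customer could look is sketched below. The element names are merely DFXML-inspired, not the actual schema, and a plain HMAC stands in for a full XML signature framework such as XML-DSig; the signing key is likewise an assumption.

```python
import hashlib
import hmac
import xml.etree.ElementTree as ET

CSP_KEY = b"csp-signing-key"  # stands in for the CSP's real signing key

def access_record_xml(filename, action, timestamp):
    """Build a DFXML-style record; element names are illustrative only."""
    obj = ET.Element("fileobject")
    ET.SubElement(obj, "filename").text = filename
    ET.SubElement(obj, "action").text = action
    ET.SubElement(obj, "mtime").text = timestamp
    return ET.tostring(obj, encoding="unicode")

def sign(xml_text, key=CSP_KEY):
    """A simple HMAC in place of a full XML signature."""
    return hmac.new(key, xml_text.encode(), hashlib.sha256).hexdigest()

record = access_record_xml("inbox/mail-1042", "read", "2011-05-01T09:12:00Z")
signature = sign(record)
```

The customer verifies the signature before importing the record into their own forensics framework; any modification of the XML between CSP and customer invalidates the tag.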
The rapid increase in new cloud services and their popularity will, in the future, lead to systems, applications, or accounts being compromised in the cloud. Attackers are always at the leading edge of technology and are fully aware of the potential that cloud environments offer. The challenge is thus to support forensic processes for cloud environments, which necessitates cooperation between the customer and the CSP.
The issues examined in this article show that traditional methods and processes of digital forensics must be reconsidered, especially in terms of forensic investigations in cloud environments. It is primarily the task of the scientific community to develop new methods and processes that address the issue of forensics in the cloud.
That said, the CSPs really need to do their homework. Unfortunately, most CSPs currently don't see the potential that an interface of this kind offers to the user. This is perhaps less an issue of technical feasibility and more an issue of the financial overhead that such an implementation would cause for the CSP. The costs of the implementation could be passed on to the customer – if you want this kind of interface, you have to pay for it.
This approach is not unusual: Security costs money, and CSPs don't initially earn anything with it. As long as users allow CSPs to get away with this behavior, nothing is likely to change.
When CSPs start to depend on users, rather than vice versa, a paradigm change might occur. Until that happens, it remains to be hoped that the security mechanisms provided by the CSP and complemented by the customers' own mechanisms are robust enough to survive.