Emerging Cybersecurity Technologies

By: Amy Wees

CSEC670

June 9, 2013

 

Abstract: Advanced cyberattacks on the public and private sectors at the local, national, and international levels have prompted increased funding and support for the study of emerging cybersecurity technologies.  This paper discusses the emerging technologies and strategies that can be integrated across the public and private sectors to improve cybersecurity at the local, national, and international levels.  New technologies need to assess networks dynamically and in real time, as with remote agents and real-time forensic analysis.  These technologies also need to make the attack space less predictable and constantly evolving, as with moving target defense.

Emerging Cybersecurity Technologies

The E-Government Act of 2002 was signed by President Bush to move toward a more around-the-clock government.  The dream was to eliminate the need to stand in line at the DMV for half a day just to pay annual vehicle registration fees (Barker, 2011).   Security was certainly a concern, but it was not at the forefront of the move, as government agencies would go through massive changes in equipment, manning, and practices in order to move information and programs online.  Now, over a decade later, we still see changes taking place, such as the Department of Veterans Affairs recently moving all of its applications, forms, and records online.  The high cost of getting the government caught up was expected with such an overhaul of the system; however, the U.S. should have spent more on cybersecurity and had to learn this lesson the hard way.  The recent breaches by Anonymous into the FBI's and the Department of Homeland Security's systems were disappointing, as these are the two government agencies tasked with taking on cybercrime (Novasti, 2012).  How does the government expect to control the protection of SCADA systems for critical infrastructures, as recently proposed by Congress, if it cannot protect its own assets (Associated Press, 2012)?  Annual Federal Information Security Management Act (FISMA) audits still point to lax practices (US SEC, 2011).

In 2009, President Obama called for a malware-based cyberattack against Iran's nuclear computer networks through the use of the Stuxnet worm, noted as the first use of cyber as a weapon by the US (Airdemon, 2010).  More recently, Iran has experienced further cyberattacks linked to its nuclear systems and operations.

Advanced Persistent Threats (APTs) have changed the cybersecurity game, as APT attacks can be so sophisticated that many well-known techniques for detection and mitigation are ineffective against them.  An APT that utilizes targeted exploitation code leveraging zero-day vulnerabilities will not be detected by intrusion detection systems or anti-virus products (Casey, 2011).  The issue is that once the malware is detected, it may not be obvious how long it was operational, nor can it be determined whether the discovered malware is the entirety of the compromise; an APT, often run by state-sponsored attackers, may leverage multiple malware tools to maintain access.  With the aforementioned attacks on critical infrastructures and government systems, as well as an overall increase in the complexity of cyberattacks, governments at the international level now consider cybersecurity more crucial than ever before.

This paper discusses the emerging technologies and strategies that can be integrated across the public and private sectors to improve cybersecurity at the local, national, and international levels.  New technologies need to assess networks dynamically and in real time, as with remote agents and real-time forensic analysis.  These technologies also need to make the attack space less predictable and constantly evolving, as with moving target defense.

Moving Target Technologies

Moving Target (MT) technologies aim to constantly change the attack surface of a network, increasing the cost for an attacker and decreasing the predictabilities and vulnerabilities present at any time (NITRD, 2013).  The cybersecurity problem with most networks today is that they are static: an attacker can analyze them over time and strategize on the best way to capitalize on vulnerabilities.  Moving target defenses allow the network to continually change its configurations and environmental values (Grec, 2012).

For example, an organization could change the network IP addresses, operating systems, open ports and protocols, and many other areas of the environment.  This way, when an attacker scans the network, the scans are inconsistent, and if an attack is launched, the chances of successful penetration are severely reduced because of the dynamic changes in the environment.  The MT defense could also react to an attack by reducing the areas of the network known to or accessed by the attacker (Grec, 2012).
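As a simple illustration of the concept, the following Python sketch simulates a rotation loop that periodically re-randomizes the port and subnet assigned to each service.  The service names, address pool, and rotation interval are all hypothetical; a real MT deployment would drive actual network reconfiguration rather than print a mapping.

```python
import random
import time

# Illustrative moving-target rotation loop (hypothetical values, not any
# vendor's implementation): the "attack surface" is a mapping of services
# to network parameters that is re-randomized on a fixed interval.
EPHEMERAL_PORTS = range(20000, 60000)
SUBNET_POOL = ["10.1.{}.0/24".format(i) for i in range(1, 255)]

def rotate_surface(services):
    """Assign each service a fresh port and subnet, invalidating prior scans."""
    return {
        name: {"port": random.choice(EPHEMERAL_PORTS),
               "subnet": random.choice(SUBNET_POOL)}
        for name in services
    }

if __name__ == "__main__":
    services = ["web", "mail", "dns"]
    for _ in range(3):            # three rotation cycles for demonstration
        print(rotate_surface(services))
        time.sleep(1)             # real deployments rotate on policy-defined intervals
```

An attacker scanning during one interval would find a surface that no longer exists in the next, which is exactly the inconsistency described above.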

The most difficult challenge in using MT is maintaining an operational network for users during the changes while minimizing the costs involved.  The JumpSoft company has created a subscription-based MT defense package called "JumpCenter."  JumpCenter uses reactive and adaptive automated systems that reduce the attack surface; the concept behind JumpCenter and MT defenses generally is to maximize the cost and risk to the attacker.  JumpCenter keeps the network operational by deploying in the application layer, which is a frequent target because regular vendor releases continually introduce exploitable flaws.  JumpSoft adds that downed applications hit the mission hardest, because the loss of one application can bring down the business (JumpSoft, 2013).

Government Support of Moving Target Technologies

In January of 2011, the President's Council of Advisors on Science and Technology sponsored the work of the Networking and Information Technology Research and Development (NITRD) program.  NITRD has identified emerging technologies such as MT as Federal Cybersecurity game-changing research and development themes (NITRD, 2013).  The government's support of NITRD and other research partners in developing MT technologies reinforces the efforts of the public and private sectors to redefine security in the cyber domain.

For example, in 2011 Professor Scott DeLoach of Kansas State University received a $1 million grant from the Air Force Office of Scientific Research to study MT (Chabrow, 2012).  Intelligent defenses can change the military's reactive posture on cyber to an active one, giving it the upper hand on the adversary.  If military networks can be made unpredictable through the use of MT, the chances of successful cyberattacks and APTs are lessened.

Remote Agent Technologies

Remote agents, also known as mobile agents, can actively monitor a network's security.  Active monitoring is necessary because a network that is not updated with the latest patches has been shown to be reactive and ineffective against today's cyber threats.  Additionally, large networks are nearly impossible for a system administrator to monitor successfully, as most are made up of multiple nodes, each with constant system variations and users (Tripathi, Ahmed, Pathak, Carney & Dokas, 2002).  Remote agents can conduct centralized testing of network security from a remote client or server without a large manpower or travel cost requirement.  Most importantly, remote agents can run network tests without relying on insecure firewall protocols (UMUC, 2012).

Currently, many organizations use network monitoring tools based on SNMP or on the occasional execution of scripts written for specific network threats, which require tedious and complicated updates to remain current and valid.  Both SNMP agents and script-based monitoring offer limited functionality and require specially trained administrators to comb through logs and write updates (Tripathi, Ahmed, Pathak, Carney & Dokas, 2002).

In response to these network monitoring difficulties, a team at the University of Minnesota worked under a grant from the National Science Foundation to develop a framework for mobile agent network monitoring using the Ajanta mobile agent system.  The Ajanta mobile agents can remotely filter information and alter system functions.  They use a centralized database to detect and compare system events to ensure policies are enforced.  Using Ajanta, administrators can securely change an agent's monitoring and filtering rule sets as well as dynamically remove agents or add new ones to an area of the network based on triggered events.  The model contains different types of agents that can monitor, subscribe, audit, or inspect.

Perhaps the largest difference between traditional SNMP monitoring systems and a remote agent system is the ability of a remote agent to relate one event in the system to another, generate an alert in the log file, and raise the awareness or threat levels of other agents.  For example, if one agent detects a user logging in with multiple accounts and an auditor agent detects a subsequent remote or console login in the event registry, a password or security compromise can be detected.  In another example of an agent-driven system reaction, an auditor agent is sent to the login event subscriber by a management station; when root login events pass a predefined threshold, an alert is sent back to the manager to raise the alert level on the system (Tripathi, Ahmed, Pathak, Carney & Dokas, 2002).  All of this can be done without a system administrator's intervention.
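A minimal sketch of this correlation idea follows.  The event names, threshold, and Manager class are invented for illustration and are not the Ajanta API; the point is only to show agents reporting events to a manager, which relates events from different agents and raises the alert level without human intervention.

```python
from collections import defaultdict

class Manager:
    """Hypothetical management station that correlates agent reports."""

    def __init__(self, root_login_threshold=3):
        self.alert_level = 0
        self.events = defaultdict(list)      # event type -> list of reports
        self.root_login_threshold = root_login_threshold

    def report(self, agent_id, event_type, detail):
        self.events[event_type].append((agent_id, detail))
        self._correlate()

    def _correlate(self):
        # One agent sees multiple accounts in use; an auditor agent sees a
        # subsequent remote login: flag a possible credential compromise.
        if self.events["multi_account_login"] and self.events["remote_login"]:
            self.alert_level = max(self.alert_level, 2)
        # Root logins past a predefined threshold also raise the alert level.
        if len(self.events["root_login"]) >= self.root_login_threshold:
            self.alert_level = max(self.alert_level, 3)

manager = Manager()
manager.report("monitor-1", "multi_account_login", "user bob on 4 accounts")
manager.report("auditor-7", "remote_login", "console login from 10.2.3.4")
print(manager.alert_level)   # -> 2, with no administrator intervention
```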

Government Support for Remote Agent Technologies

The government can benefit from the advancement of remote monitoring capabilities, as the largest and most complex networks are government owned and operated.  There are many coalition military networks that cross the boundaries of multiple countries, and the monitoring and security of these government defense networks is in the best interest of everyone involved.

The ability to monitor classified defense networks with this level of clarity across international domains could aid in preventing insider leaks such as Bradley Manning's 2010 leak of military intelligence data to Wikileaks.  Although Manning was prosecuted, Wikileaks founder Assange has yet to be prosecuted for publishing classified material on the Internet (Wu, 2011).  Until international cyber laws and jurisdiction are better defined, it is in the best interest of all governments to find ways to successfully and dynamically monitor their networks for signs of attack or breach.

Real-Time Forensic Analysis

The use of computer forensic tools in criminal proceedings has proven necessary for making a case in today's digital world.  Also related to network monitoring is real-time forensic analysis, an investigative approach that maintains situational awareness and continuous observation of the network (UMUC, 2012).  While remote agent monitoring actively watches the network and takes the actions necessary to correlate threats and increase defenses, real-time forensic analysis allows an incident to be reproduced and its cause and effect to be analyzed further (UMUC, 2012).

A Network Forensics Analysis Tool (NFAT) prepares the network for forensic analysis and allows for ease of monitoring and convenience in identifying security violations and configuration flaws.  The information found when analyzing network traffic can also contribute background data to other events (Corey, Peterman, Shearin, Greenberg, & Van Bokkelen, 2002).

In addition to monitoring the network, network forensics has many practical uses.  For example, health care agencies fall under the Health Insurance Portability and Accountability Act, which requires that information passed between networks be monitored.  Although not all of the information provided by an NFAT may be necessary, in legal situations it is better to have too much information than not enough.  An NFAT can also allow for recovery of lost data when other back-up methods fail, and for repeatable analysis of traffic anomalies or system errors (Corey, Peterman, Shearin, Greenberg, & Van Bokkelen, 2002).
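The sketch below illustrates the basic "prepare the network for analysis" function of an NFAT: continuously capturing traffic into timestamped archives so that an incident can later be reproduced.  It uses the third-party scapy library; the filter and window length are placeholder values, and packet capture normally requires administrator privileges.

```python
import time
from scapy.all import sniff, wrpcap

def capture_window(seconds=60, bpf_filter="tcp"):
    """Capture one time window of traffic and write it to a timestamped pcap.

    Archived captures give investigators a replayable record of events,
    which is the core idea behind an NFAT.
    """
    packets = sniff(filter=bpf_filter, timeout=seconds)
    archive = "capture-{}.pcap".format(int(time.time()))
    wrpcap(archive, packets)
    return archive

if __name__ == "__main__":
    print("archived:", capture_window(seconds=10))
```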

Government Support of Real-time Forensic Analysis

Government support of real-time forensic analysis is most obvious in the state and federal criminal justice sectors, as forensic analysis is a regular part of legal proceedings and police agencies have expanded to include entire divisions devoted to computer forensics.  The question remains whether governments, from the local to the international level, should be concerned with real-time forensic analysis outside of the criminal justice realm.  Forensic analysis makes sense from a network defense perspective, as governments can learn more about emerging threats by conducting in-depth analysis of them.

In 2006, the National Science Foundation and DARPA funded a project at Columbia University to create an Email Mining Toolkit (EMT) in support of law enforcement and other government research.  The EMT allows email traffic to be analyzed for outside communications, social interactions, and specific attachments.  According to the report, EMT is in use by many organizations (Stolfo, Creamer, & Hershkop, 2006).

Since 1999, DARPA has funded numerous information assurance experiments using live red, blue, and white teams to simulate attackers, responders, and users during cyberattack events such as denial of service, malware, and other threats known to be in use by the adversary based on intelligence data (Levin, 2003).  Real-time forensic analysis has allowed for early detection and analysis of red team efforts by the blue team and has contributed to lessons learned for future responses.

Conclusion

The responsibility to protect public and private assets at the local, national, and international levels cannot fall solely on the government.  Through the cooperative use of government, scientific, and academic programs, emerging technologies can be brought to the forefront to secure cyber assets dynamically and in real time.  Increased and continuing cooperation to fine-tune moving target defenses, remote agent technologies, and real-time forensic analysis will ensure these technologies can be implemented across sectors to protect against emerging threats now and into the future.

 

References:

Airdemon. (2010). Stuxnet worm. Retrieved from http://www.airdemon.net/stuxnet.html

Associated Press. (2012, February 6). Bigger U.S. role against companies’ cyber threats? Retrieved February 25, 2012, from Shreveport Times: http://www.shreveporttimes.com/article/20120206/NEWS03/120206009/Bigger-U-S-role-against-companies-cyberthreats-?odyssey=tab%7Ctopnews%7Ctext%7CFRONTPAGE

Barker, W. C. (2011). E-Government Security Issues and Measures. In H. Bidgoli, Handbook of Information Security (pp. 97-107). Hoboken: John Wiley & Sons.

Casey, E. (2011). Handbook of digital forensics and investigation. Burlington: Academic Press.

Chabrow, E. Government Information Security, (2012). Intelligent defense against intruders. Retrieved from Information Security Media Group, Corp. Website: http://www.govinfosecurity.com/interviews/intelligent-defense-against-intruders-i-1565

Corey, V., Peterman, C., Shearin, S., Greenberg, M. S., & Van Bokkelen, J. (2002). Network forensics analysis. IEEE Internet Computing, 6(6), 60-66.

Grec, S. (2012, May 23). Is moving-target defense a security game changer? Retrieved from https://www.novainfosec.com/2012/05/23/is-moving-target-defense-a-security-game-changer/

JumpSoft. (2013). Cyber moving target defense. Retrieved from http://www.jumpsoft.net/solutions/moving-target-defense/

Levin, D. (2003, April). Lessons learned in using live red teams in IA experiments. In DARPA Information Survivability Conference and Exposition, 2003. Proceedings (Vol. 1, pp. 110-119). IEEE.

NITRD. (2013). Moving target. Retrieved from http://cybersecurity.nitrd.gov/page/moving-target

Stolfo, S. J., Creamer, G., & Hershkop, S. (2006, May). A temporal based forensic analysis of electronic communication. In Proceedings of the 2006 international conference on Digital government research (pp. 23-24). Digital Government Society of North America.

Tripathi, A., Ahmed, T., Pathak, S., Carney, M., & Dokas, P. (2002). Paradigms for mobile agent based active monitoring of network systems. In Network Operations and Management Symposium, 2002. NOMS 2002. 2002 IEEE/IFIP (pp. 65-78). IEEE.

TV-Novasti. (2012, January 20). FBI Website Crippled by Anonymous. Retrieved February 14, 2012, from rt.com: http://rt.com/usa/news/crippled-fbi-megaupload-anonymous-239/

UMUC. (2012). Module 7: The future of cybersecurity technology and policy. Retrieved from the online classroom https://tychousa.umuc.edu

U.S. Securities and Exchange Commission. (2011). 2010 Annual FISMA Executive Summary Report. Washington D.C.: U.S. Securities and Exchange Commission.

Wu, T. (2011, February 4). Drop the Case Against Assange. Retrieved February 27, 2012, from Foreign Policy: http://www.foreignpolicy.com/articles/2011/02/04/drop_the_case_against_assange?page=0,0

 

Requirements for Business Contingency and Continuity Plans

By: Amy Wees

CSEC650, 9045

April 21, 2013

 

Abstract: Technology plays a vital role in business and threats to technology are constantly evolving.  Businesses must be ready to react to a multitude of situations from a computer virus to a hurricane.  The only way to react successfully is to have a well-written, well-tested contingency and continuity plan.  The steps to planning include identifying threats through Business Impact Analysis (BIA), planning for mitigation of risks or reduction of impact to the business through contingency plan development, and setting up recovery options such as backup sites.  Finally, the plan must remain actionable and up-to-date, and the best way to ensure this is through training personnel and testing the plan on a regular basis.

Requirements for Business Contingency and Continuity Plans

On April 17, 2013, a giant explosion ripped through the small town of West, Texas after the West Fertilizer Company plant caught fire.  The cause of the fire is still unknown, but many people were killed attempting to extinguish the massive blaze, air traffic over the area was halted due to the dangerous chemicals released, and structures for miles around the plant were damaged and evacuated (Eilperin & Fears, 2013).  Many are probably wondering how this happened and whether the explosion could have been prevented.  The Environmental Protection Agency (EPA) reported that the fertilizer plant was fined in 2006 for a deficient risk management plan that failed to address safety hazards, employee training, and maintenance procedures.  Furthermore, the owner does not know how he will recover from this disaster (Eilperin & Fears, 2013).  Even if West Fertilizer has insurance to cover the damage to the building and company assets, the costs of disaster recovery could be far more than West can afford.  Insurance may not cover the medical expenses and deaths of the citizens harmed by the explosion.  How will displaced employees be paid?  Will there be lawsuits?  Did pertinent company data needed to continue operations or file damage claims get lost in the fire?

Although the fire may not have been preventable, a contingency and continuity plan would help West Fertilizer Company pick up the pieces and continue operations.  West Fertilizer is not alone in its lack of business continuity and disaster recovery planning.  A survey conducted by OpenSky Research in 2006 showed that almost half the businesses in America had no business continuity plan in place.  Of the companies that did have plans, the survey reported that the greatest motivations were the reputation of the business and customer satisfaction, followed by compliance with regulations and past experiences with operational hiccups.  Businesses reported that network outages, malware, and data corruption were considered highly threatening, along with disasters such as fires and blackouts.  Businesses without a plan cited budgetary and resource constraints as the primary factors (On Windows, 2006).

It is obvious that businesses should be concerned with contingency and continuity planning, as it is only a matter of when, not if, something happens that can shut the business down.  Today more than ever, businesses depend on technology such as computers, networks, mobile devices, and the Internet to run their operations, and protecting these assets from cybersecurity threats and service disruptions is paramount to the bottom line and customer satisfaction.  However, to convince management that business continuity planning is a worthwhile investment, the plan must weigh the benefits of implementing cybersecurity, maintenance, and safety protocols against the costs of installing them.  The argument for a plan must help management see a Return on Investment (ROI) so that forecasted returns on money spent can be estimated.  The ROI calculation should include the purchase of the proposed solutions, the cost of employee training, and the cost of paying the staff who will manage the solutions; together, these account for the Total Cost of Ownership (TCO) of the investment.  If costs are not projected accurately, management may reject the proposal or restrict the budget (UMUC, 2011).
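A back-of-the-envelope sketch of the TCO and ROI calculation follows; every dollar figure is a hypothetical placeholder that a planner would replace with real quotes.

```python
def total_cost_of_ownership(purchase, training, annual_staff, years):
    """TCO = purchase price + employee training + staff to manage the solution."""
    return purchase + training + annual_staff * years

def roi(avoided_losses, tco):
    """Forecasted return on money spent, as a ratio of net benefit to cost."""
    return (avoided_losses - tco) / tco

# Placeholder figures: a $50,000 solution, $10,000 of training, and
# $30,000/year of staff time over a three-year horizon.
tco = total_cost_of_ownership(purchase=50_000, training=10_000,
                              annual_staff=30_000, years=3)
print(f"TCO: ${tco:,}")                                     # TCO: $150,000
print(f"ROI: {roi(avoided_losses=400_000, tco=tco):.0%}")   # ROI: 167%
```

Presenting the figures this way makes the trade-off explicit: if the losses the plan avoids exceed the TCO, the ROI is positive and the proposal is easier to defend.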

This paper will cover the steps to identifying threats and risks to a business, creating and maintaining business contingency and continuity plans, options for recovery of data and business operations, and recommendations to put the plan into practice by conducting business continuity testing for a twenty-four month testing cycle.

Developing Business Contingency Plans

According to the National Institute of Standards and Technology's (NIST) contingency planning guide for federal information systems, there are seven key steps to developing a plan: 1) Construct the contingency planning policy; 2) Complete a business impact analysis (BIA); 3) Pinpoint preventive measures; 4) Produce contingency approaches; 5) Create an information system contingency plan; 6) Conduct testing, training, and exercises; and 7) Ensure the plan is maintained (Swanson, Bowen, Phillips, Gallup & Lynes, 2010).   Although these steps are written specifically for federal systems, they can be used by any business as an overall framework for developing a contingency and continuity plan.  For the purpose of this paper, the seven steps are simplified to three broader areas: 1) Identify threats to the business; 2) Create a plan to alleviate or lessen the impact of the threats; 3) Train personnel and test the plan to ensure accuracy (Cerullo, V., & Cerullo, M. J., 2004).  Authors should keep in mind during plan development that all steps should be documented, actionable, and, most importantly, kept up to date (Balaouras, 2009).

Identify Threats to the Business

The first aspects a company must consider when creating contingency and continuity plans are the potential threats to the business.  Some threats will be different depending on the type of business.  For example, an Internet based company may be more concerned with cyber threats such as malware and viruses than a small retail store with little to no web presence.  The retail store, on the other hand, may be more concerned with protecting databases containing customer credit card information.  A defense contractor may see a competitor accessing their intellectual property as the largest threat to the business.  There are also threats that impact every business such as natural disasters, electrical outages, and fires which must be taken into consideration.

Business Impact Analysis

No business is exempt from harm or disruption; however, threats may not always be easy to quantify or identify.  For this reason, a Business Impact Analysis (BIA) can assist in identifying the primary areas affected by a disaster or contingency.  A BIA will distinguish the services and functions most critical to the business' bottom line and classify those services and functions according to their effect on the business, level of risk, and likelihood of occurrence.  A recommendation is then made on whether to avoid, mitigate, or absorb each risk, and on the methods for doing so.  Management may also choose to delve further into the identified risks by conducting risk assessments (Cerullo, V., & Cerullo, M. J., 2004).
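The classification step can be illustrated with a short sketch; the 1-5 scales, thresholds, and business functions below are invented for the example rather than drawn from any standard.

```python
def classify(impact, likelihood):
    """Score a function by impact x likelihood (each 1-5) and recommend a treatment."""
    score = impact * likelihood
    if score >= 15:
        return score, "avoid or mitigate immediately"
    if score >= 8:
        return score, "mitigate"
    return score, "absorb"

# Hypothetical (impact, likelihood) ratings for three business functions.
functions = {
    "point of sale":  (5, 3),
    "inventory":      (4, 2),
    "store security": (3, 2),
}
for name, (impact, likelihood) in functions.items():
    print(name, classify(impact, likelihood))
```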

The first step in conducting a BIA is to identify the primary business processes, their supporting systems, and the criticality of recovering those processes and systems.  The impacts of a system outage are then determined, including the maximum downtime that can be tolerated while still allowing the business to maintain operations.  Possible work-around options should also be listed.  Management and process owners should work together to create a comprehensive list of processes, process descriptions, and the systems directly related to these processes (Swanson, Bowen, Phillips, Gallup & Lynes, 2010).

The next step in the BIA is to identify the resources required to continue primary processes and any interrelated or dependent systems and assets.  Considerations for a thorough resource listing are facilities, staff, hardware, software, electronic files, system elements, and critical records (Swanson, Bowen, Phillips, Gallup & Lynes, 2010).  Some companies may have a configuration manager or other information systems manager who maintains this information.  The constant changes and updates in technology make it important to update this list regularly.  An example listing of assets follows:

Table 1: Company ABC Critical Resources

System | Platform/Version | Primary User | Critical Process | Dependencies
Exchange Server | Windows Server 2008 | All users, internal and external | Ensures mail is sent/received | Domain Controllers, Active Directory Servers

 

The final step in the BIA is to set priorities for recovery of the various systems linked to the critical processes identified in step one.  Systems should be recovered in order of their criticality to the business and the alternate options available (Swanson, Bowen, Phillips, Gallup & Lynes, 2010).    For example, if the previously mentioned small retail business loses its point of sale (POS) system, cashiers may be able to add up the cost of items and collect cash from customers for a short period, but there is a maximum amount of time before the business starts to lose customers; the POS may therefore be the most critical asset on the recovery list.  Secondary to the POS may be the inventory system.  Many retailers depend on an automated inventory system to track incoming deliveries and sales, order new supplies, and pay suppliers for items received.  These systems are immensely complex, and keeping track of inventory on paper and later updating the recovered system could be costly in man-hours and mistakes.  Third for the retailer may be the store security system.  Although employees could be posted at the door to check receipts against purchases, theft may increase, and the store could lose valuable evidence related to a crime or incident that occurs.
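A minimal sketch of this prioritization step follows, ordering systems by criticality and maximum tolerable downtime; the values mirror the retail example above and are purely illustrative.

```python
# Each record pairs a system with its hypothetical criticality rank and the
# maximum downtime (hours) the business can tolerate before serious losses.
systems = [
    {"name": "inventory",      "criticality": 2, "max_downtime_hrs": 24},
    {"name": "point of sale",  "criticality": 1, "max_downtime_hrs": 4},
    {"name": "store security", "criticality": 3, "max_downtime_hrs": 48},
]

# Recover the most critical, least downtime-tolerant systems first.
recovery_order = sorted(systems,
                        key=lambda s: (s["criticality"], s["max_downtime_hrs"]))
for rank, system in enumerate(recovery_order, start=1):
    print(rank, system["name"])
```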

Create a plan to alleviate or lessen the impact of the threats

Now that the BIA is complete, the business can work on a plan to mitigate the identified risks.  According to Swanson, Bowen, Phillips, Gallup and Lynes (2010), a contingency plan has three phases, along with supporting documentation such as the BIA, personnel contact information, and written procedures.  The three phases are: Activation and Notification; Recovery; and Reconstitution.

Activation and Notification Phase

When a contingency or event occurs that affects a crucial business process, the first step is to put the plan into action and notify the personnel responsible for and affected by the required actions.  This means the plan must identify primary and alternate team members' roles and responsibilities.  Procedures should include instructions for notifying staff and customers, contact information and primary duties of personnel internal and external to the organization, locations of alternate work sites, and checklists to follow in order to carry out alternate processes while primary means are restored (Cerullo, V., & Cerullo, M. J., 2004).  Procedures should be easy to follow and not overly complicated.

Recovery Phase

After personnel are deployed and active in alternate processes to keep the business afloat, it is time to start recovery of the assets affected by the contingency, in the order of priority previously identified during the BIA.  The recovery phase will take up the greatest portion of the contingency plan, as there are many options to consider and the costs are high.  At a minimum, system back-ups should be created regularly and stored at an off-site location or in a cloud environment to minimize system recovery time and allow for reconstitution from another location.  Procedures for system back-up and recovery should be included in the business continuity plan (UMUC, 2011).  The entire environment should be included in the backup: software, executables, databases, training information, and all systems needed to run the operation, since the ability to get back to business depends on the quality of the backups (Barry, 2012).
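As a simple illustration of a scheduled full-environment backup, the sketch below archives a directory tree into a timestamped tarball destined for off-site or cloud storage.  The paths are placeholders, and a real plan would also cover databases and system state.

```python
import tarfile
import time
from pathlib import Path

def backup(source_dir="/srv/business", dest_dir="/mnt/offsite"):
    """Archive source_dir into a timestamped tar.gz under dest_dir.

    Timestamped names preserve a history of backups, supporting the
    point-in-time recovery described in the plan.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive_path = Path(dest_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return archive_path

if __name__ == "__main__":
    print("wrote", backup())
```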

According to a 2002 report by the Disaster Recovery Institute International, the cost of downtime ranged from three to seven percent of the information systems budget.  Examples of website downtime costs cited by Cerullo and Cerullo (2004) were $8,000 an hour for leading Internet players, $1,400 per minute on average, and an average of $78,000 per hour for a medium-sized business, with annual downtime costs of over $1 million.  Although these costs were estimated for businesses that depend heavily on the Internet, it is pertinent for any business to consider the cost of downtime when weighing options for timely recovery of assets.

Recovery Options

There are three options for recovery sites: hot, cold, or warm.  Businesses should consult with service providers and software vendors when deciding what type of site to use, or whether to outsource this service.  A hot site allows for immediate recovery, as it should contain all hardware necessary for operations and can be loaded with current operational and back-up data (Barry, 2012).  The hot site can also serve as the location for storing off-site back-ups.  The greatest consideration for a hot site is the considerable cost of creating and maintaining it.  A business should consider a hot site when the cost of the loss of systems is greater than the cost of the site (i.e., there is a ROI) and other site options such as cold or warm do not meet the need.

A cold site provides only a facility to operate from without the hardware infrastructure of a hot site.  While the cost of a cold site is lower, hardware will need to be acquired along with backups to return to regular operations.  Even with robust planning and well trained personnel, a cold site could take weeks or longer for recovery.

A warm site is the happy medium between hot and cold.  Warm sites contain some hardware and can contain backup and recovery data, depending on the setup.  Unlike a hot site, a warm site does not have the latest configurations loaded, so recovery requires more work than at a hot site but less than at a cold site.  Outsourcing is also an option, as multiple companies offer a wide range of recovery services (Barry, 2012).  Anytime outsourcing is considered, Service Level Agreements (SLAs) should be made, adhered to, and updated on a regular basis to cover the changing requirements of the business and the responsibilities of the service provider.  As previously mentioned, the quantity and type of systems needing backups, the location of backups, and the steps to recovery for each contingency should be thoroughly documented in the contingency plan.

Reconstitution Phase

During the reconstitution phase, the system should be validated to determine necessary capability and functionality so the business can return to normal operations.  If the original facility is beyond repair, the reconstitution activities can also be helpful in testing and prepping a new location for future use.  At this point, deactivation of the plan can occur, and lessons learned can be documented as well as updates to the plan.

Contingency Testing

The final step in contingency planning is to train personnel to carry out the plan and test the plan for accuracy.  Perhaps the toughest part of contingency planning is not only creating an actionable plan, but finding time during normal operations to test it.  This is where the buy-in of management is critical: if management does not push the importance of testing, employees will not feel they are stakeholders in the plan or that it is worth their time to test or train for.  There are several options for training personnel and testing the plan, from plan reviews or tabletop exercises all the way to complete backup and recovery testing cycles, or a combination thereof.  Individual checklists included in the contingency plan could also be given to key personnel or individual work centers to run through during duty hours and check for accuracy and updates.  It can be difficult for system administrators to test system checklists, as live systems are critical to operations and cannot be taken down for such purposes.  This is where virtual machines can be helpful: copies of servers can be created from virtual templates using very few system resources, allowing for testing and training on systems with no effect on current operations.

Costs for training personnel and testing the plan should be considered and included in the contingency planning and continuity of operations budget.  Potential costs include training and testing man-hours not billable to direct operating costs, purchases of additional technology (such as virtual machines and servers) utilized for testing, and cost of other additional resources necessary for testing and training such as office supplies, use of external facilities, or outsourced vendor training.

24-Month Cycle Business Continuity Testing Plan

Below is a sample testing plan based on a 24 month cycle.

Months 1-2: Plan Accuracy 

Plan appendixes are distributed to key personnel in work centers where they will run through their checklists and action items and check for accuracy.  Key personnel will train alternates on procedures.  Alternates will run checklists to ensure they are repeatable.

Months 3-4: Notification Procedures

Management will choose a tabletop scenario based on the probability of the various threats identified in the BIA.  Work centers will practice notification procedures by running through call lists based on the scenario.  On-duty and off-duty emergency contact information will be tested and updated as necessary.

Months 5-6: Activation Procedures

Management will choose another scenario based on the BIA and note the systems affected.  Key personnel will be notified to test their activation procedures based on that scenario.  Operations personnel will conduct business processes using alternate procedures, systems administrators will recover backups to alternate hardware (or virtual machines), and operations personnel will attempt processes on the recovered systems.  This practice will identify gaps in the checklists, data that was not backed up or not recoverable, and system configurations needed after recovery.  Checklists and procedures will be updated based on this exercise.

Months 7-8: Reconstitution Testing

Reconstitution is the process of ensuring that a system is fully operational and configured for use.  In order to validate a system, users must identify the data needed on the system and the procedures for working with that data.  This is not covered in the BIA but should be covered in a continuity book for the duty position.  Continuity books are created to ensure that someone with limited knowledge of a position can perform its basic tasks when key personnel are not available.  During this testing phase, personnel will be given an alternate duty position for a specified period of time and will attempt to perform routine tasks using the continuity book as their guide.  Often in an emergency situation, the person who knows an essential business process best may not be available, and it will be paramount for other personnel to be able to fill in where necessary.

Months 9-10: Updating Continuity Procedures

Based on the last test of continuity books, personnel will utilize months 9-10 to update their continuity documentation and prepare for a disaster preparedness drill in months 11-12.

Months 11-12: Contingency Recovery Drill

In this test, all phases will be exercised.  Management will choose a scenario from the BIA that requires a move to an alternate facility, in this case a hot site, and ultimately employees will reconstitute operations at the new site.  First, notification procedures will be tested; employees will be informed ahead of time that this is a test of the system.  External agencies and customers will also be notified ahead of time that the agency is running this test so as not to affect operations.  Employees will start their checklists using alternate procedures for regular operations, depending on the scenario, until information technology (IT) personnel notify them to move to the hot site.  Employees will then move to the hot site and continue operations, identify shortfalls, and update the plan based on the lessons learned during the test.  This type of drill is not recommended for businesses without a hot site, as there would be too much risk to operations; however, a similar tabletop contingency drill to test employees' awareness of what to do in various scenarios would be helpful.

Months 13-24: Repeat months 1-12

During the second year, the business will repeat the testing done in the first year and adjust timelines and procedures as necessary to fine-tune the process.  Different scenarios can be given, or the same scenarios if management feels employees need more practice.  Repetition allows employees to gain confidence in plan execution and creates a mindset of contingency planning as part of day-to-day operations.

Conclusion

Technology plays a vital role in business and threats to technology are constantly evolving.  Businesses must be ready to react to a multitude of situations from a computer virus to a hurricane.  The only way to react successfully is to have a well-written, well-tested contingency and continuity plan.  The steps to planning include identifying threats through BIA, planning for mitigation of risks or reduction of impact to the business through contingency plan development, and setting up recovery options such as backup sites.  Finally, the plan must remain actionable and up-to-date, and the best way to ensure this is through training personnel and testing the plan on a regular basis.

 

 

 

References:

Baker, N. (2012). Enterprisewide Business Continuity. (Cover story). Internal Auditor, 69(3), 36-40.

Barry, C. (2012). Backup plans. Multichannel Merchant, 8(5), 36-38.

Balaouras, S. (2009). Businesses take BC planning more seriously. For Security & Risk Professionals.

Cerullo, V., & Cerullo, M. J. (2004). Business continuity planning: A comprehensive approach. Information Systems Management, 21(3), 70-78.

Eilperin, J., & Fears, D. (2013, April 18). Fertilizer facility explosion injures at least 160 in central Texas; 5 to 15 feared dead. The Washington Post. Retrieved from http://www.washingtonpost.com/world/national-security/fertilizer-plant-explosion-leaves-more-than-100-wounded-in-central-texas/2013/04/18/14fa7cb2-a7ef-11e2-a8e2-5b98cb59187f_story_2.html

Geer, D. (2012). Are You Really Ready for Disaster? Three exercises for testing your business continuity plans. CSO Magazine, 11(8), 16-18.

Karim, A. (2011). Business Disaster Preparedness: An Empirical Study for measuring the Factors of Business Continuity to face Business Disaster. International Journal of Business & Social Science, 2(18), 183-192.

Kirvan, P. (2009, July). Using a business impact analysis (BIA) template: A free BIA template and guide. TechTarget: SearchDisasterRecovery. Retrieved November 4, 2011, from http://searchdisasterrecovery.techtarget.com/feature/Using-a-business-impact-analysis-BIA-template-A-free-BIA-template-and-guide.

Lam, W. (2002). Ensuring business continuity. IT Professional, 4(3), 19-25.

On Windows. (2006, March 23). Half of us businesses lack continuity plan. On Windows Magazine, Retrieved from http://www.onwindows.com/Articles/Half-of-US-businesses-lack-continuity-plan/2063/Default.aspx

Rawlings, P. (2013). SEC’s Aguilar Pushes Continuity Plan Testing. Compliance Reporter, 25.

Rucks, A., Ginter, P., Duncan, W., & Lesinger, C. (2011). A Continuity of Operations Planning Template: Translating Public Policy into an Effective Plan. Journal of Homeland Security and Emergency Management, 8(1).

Slater, D. (2012, December 13). Business continuity and disaster recovery planning: The basics. Retrieved from http://www.csoonline.com/article/204450/business-continuity-and-disaster-recovery-planning-the-basics?page=1

Swanson, M., Bowen, P., Phillips, A., Gallup, D., & Lynes, D. (2010). Contingency planning guide for federal information systems (NIST Special Publication 800-34, Rev. 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf

Totty, P. (2009). Business Continuity: Test and Verify. Credit Union Magazine, 75(12), 46.

UMUC. (2011). Module 11: Service Restoration and Business Continuity.  Retrieved from http://tychousa.umuc.edu/

Whitworth, P. M. (2006). Continuity of Operations Plans: Maintaining Essential Agency Functions When Disaster Strikes. Journal of Park & Recreation Administration, 24(4), 40-63.

Wold, G. H. (2006). Disaster recovery planning process. Disaster Recovery Journal, 5(1).

Digital Forensics Investigations: Data Sources and Events based Analysis

Amy Wees

CSEC650, 9045

March 15, 2013

Abstract

Data sources used to gain evidence in digital forensics investigations differ significantly depending on the case.  This paper prioritizes data sources used to gain evidence for network intrusions, malware installations, and insider file deletions.  These three events drive the prioritization of the types of data that are analyzed, the information desired, and the usefulness of that data with regard to the event.  The primary focus is information garnered from sources such as user account audits, live system data, intrusion detection systems, Internet Service Provider records, virtual machines, hard drives, and network drives.

Digital Forensics Investigations: Data Sources and Events based Analysis

Introduction

Digital forensics investigations deal with a multitude of data sources used to preserve and capture evidence for use in a legal setting.  The various events or crime scenes investigators encounter drive the prioritization of the types of data that are analyzed, the information desired, and the usefulness of that data with regard to the event.  The goal of this paper is to discuss three specific events: network intrusion, malware installation, and insider file deletion, and then to analyze and prioritize the data sources that can be useful in investigating each case.

Network Intrusion

A network intrusion occurs when a computer network is accessed by an unauthorized party.  Network intrusions can have significant impacts on the victim organization as files can be stolen, altered or deleted, and hardware or software can be damaged or destroyed.  In a case study by Casey (2005) published in the Digital Investigation Journal, a network intrusion investigation is described.  The scenario presented in this case study will be the basis for analyzing data sources most crucial to the contribution of evidence.

In March of 2000 at a medical research facility, a system administrator completing routine maintenance tasks noticed an unfamiliar account named "omnipotent" on a server for which he was solely responsible.  The administrator immediately deleted the account and notified information security personnel (Casey, 2005).  The incident caused several laboratories to be shut down for days, halting ongoing medical research and resulting in severe financial losses for the company.  Luckily, after a thorough investigation, the perpetrator was caught and charged in 2004 (Casey, 2005).

Prioritized data sources

Account Auditing

Many steps were taken by forensic investigators in this scenario to preserve evidence, reconstruct the crime, track the intruder and examine the data.  The aim of this paper is to prioritize the data sources used in this scenario from the most to least useful in terms of a network intrusion.  The first data source used was review of user accounts during routine maintenance.  Without routine review of user accounts and permissions, the account the intruder was using to access the server may never have been discovered (Casey, 2005).  This scenario exemplifies why account and role auditing is so vital.

The Federal Agency information technology (IT) handbook on technical controls published by the National Institute of Standards and Technology (NIST) (2002) recommends that access to any asset be controlled by combining technical and administrative controls to ensure that only approved users are given an applicable level of access and that they can be held accountable for their use of information systems.  Access should be monitored by positively identifying and authenticating users.  The handbook also makes clear that a weakness in policy at one node in the network can put other nodes at risk; therefore, it is essential that all agencies have a uniform access control policy (NIST, 2002).  Account and role maintenance should require users to authenticate with a strong password and to change passwords on a consistent basis.  Administrators must also ensure user names belong to currently identified users and that accounts are deleted and updated often.  Auditing should be established on user accounts so that accounts that have not been active for a period of time are deleted, and incorrect logon attempts should be limited, with accounts locked after multiple erroneous password entries (NIST, 2002).

If some of these policies had been in place in the above scenario, the "omnipotent" user may have been discovered earlier or prevented from accessing the system in the first place.  Role auditing can be challenging when dealing with multiple operating systems and role-based user accounts, as each operating system has different account systems and auditing procedures.  It may also be difficult for organizations with a large pool of users to weed out inactive or incorrect accounts.  Organizations may also outsource their IT services, making user role auditing difficult, as the administrator on the other end of the phone may not be able to positively identify a user or grant the correct permissions to an account.  User account and role auditing is the most useful of the four data sources in this network intrusion scenario because it is easy to accomplish and, if maintained properly, can help an administrator identify whether an account has been misused, locked out, or does not belong.
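A hypothetical sketch of such an account audit follows, flagging accounts inactive beyond a set window and locking accounts after repeated failed logons; the record layout, dates, and thresholds are invented for the example.

```python
from datetime import datetime, timedelta

# Illustrative NIST-style policy parameters (placeholder values).
INACTIVITY_WINDOW = timedelta(days=90)
MAX_FAILED_LOGONS = 5

accounts = [
    {"user": "jsmith",     "last_login": datetime(2013, 1, 2), "failed": 1, "locked": False},
    {"user": "omnipotent", "last_login": datetime(2012, 6, 1), "failed": 7, "locked": False},
]

def audit(accounts, now):
    """Flag inactive accounts for deletion and lock accounts with excess failures."""
    for acct in accounts:
        if now - acct["last_login"] > INACTIVITY_WINDOW:
            print("flag for deletion (inactive):", acct["user"])
        if acct["failed"] >= MAX_FAILED_LOGONS:
            acct["locked"] = True
            print("locked after failed logons:", acct["user"])

audit(accounts, now=datetime(2013, 6, 9))
```

Run routinely, even a check this simple would have surfaced an unfamiliar, misbehaving account like "omnipotent" well before a manual review happened to notice it.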

Live System Data

Next, to gain more evidence, investigators used the EnCase program to capture live data from the systems and kept an audit log by utilizing the script command.  During this capture, they found that the intruder was accessing the network through a dial-up connection in Texas, had installed a sniffer, and had replaced the original Telnet with a version containing a backdoor vulnerability allowing remote access.  The sniffer log contained records of the backdoor intrusions as well as root passwords for multiple computers.  They also discovered the hacker had created his own Telnet password, "open_sesame", to access and compromise additional computers on the network (Casey, 2005).

The primary reason an incident handler would use a tool such as EnCase to capture live data is to determine whether an event has occurred and whether a full investigation needs to be carried out on a system.  In the above scenario, the incident handlers used EnCase to capture live, volatile data and were able to determine that a network connection had been made from the intruder to a given computer.  They were also able to view live system logs to see what passwords had been used to access the system.  Capturing data from live systems is also called "live forensics."  Live forensics captures system information, or volatile data, that disappears after the device is powered down.  The challenges of live forensics lie in preserving the state of the system and ensuring the data captured is forensically sound (McDougal, 2006).  The best way to do this is by using a forensic toolkit such as EnCase, which keeps the process as automated as possible.   In the above scenario, a new employee who was not competent in live forensic processes missed files on several machines and did not keep the audit log that would have allowed incident handlers to determine which data came from which computer, rendering all of the evidence captured inadmissible in the case.  Live system data is the second most useful data source because it provides the most promising evidence of which files and systems have been compromised and shows in real time how the intruder is accessing the system (Casey, 2005).
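The sketch below is not EnCase; it is only a minimal illustration, using the third-party psutil library, of capturing volatile state (running processes and live network connections) while appending to an audit log so each artifact can be tied back to the machine it came from.

```python
import json
import socket
import time
import psutil

def snapshot_volatile_state(log_path="audit.log"):
    """Record processes and live connections with host and timestamp metadata."""
    state = {
        "host": socket.gethostname(),
        "captured_at": time.time(),
        "processes": [p.info for p in psutil.process_iter(["pid", "name"])],
        "connections": [
            {"laddr": str(c.laddr), "raddr": str(c.raddr), "status": c.status}
            for c in psutil.net_connections(kind="inet")
        ],
    }
    with open(log_path, "a") as log:   # append-only audit trail
        log.write(json.dumps(state) + "\n")
    return state

if __name__ == "__main__":
    snap = snapshot_volatile_state()
    print(len(snap["processes"]), "processes,", len(snap["connections"]), "connections")
```

The host and timestamp fields are the point: without that audit trail, as the scenario shows, captured evidence cannot be attributed to a machine and becomes inadmissible.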

Intrusion Detection System

The third data source used was the intrusion detection system.  After the subnet used by the intruder in Texas was discovered through the live system analysis, investigators were able to reconfigure the intrusion detection system to monitor network traffic for these connections.  After the reconfiguration, investigators monitored network traffic and watched as the hacker used the Telnet backdoor password to access more machines not previously known to be targeted.  Critical systems could then be secured and processed for evidence as a result of these findings (Casey, 2005).

Intrusion Detection Systems (IDS) are indispensable in detecting network intrusions because they can be programmed to automatically alert administrators when abnormal network traffic occurs.  Hill & O'Boyle (2000) compare an IDS to a burglar alarm: one observes cyberspace and the other physical space, but both provide alerts when the unforeseen occurs.  The main difference is that unauthorized actions are harder to detect in cyberspace than in a physical space.  The challenge for an IDS operator is sorting the harmless activity from the anomalous and programming the IDS to capture future anomalies (Hill & O'Boyle, 2000).

Automated IDS and accompanying forensic procedures utilize "signature matching," which searches network connections and activity, alerting on specific incident patterns and means of attack.  Unfortunately, automatic signature matching is not a deterministic process and depends on many factors.  Signatures often create false alarms because they are too generalized, such as alarms for port scanning.  Attack profiles can also vary considerably, from well-known malware insertion attempts to customized programs created to target specific systems and not yet known to the public.  The latter customized attacks are not caught by an IDS because signatures do not yet exist for them and are not made available until after the attack.  The downside of utilizing an IDS in this situation is that, without the latest updates, it is not nearly as effective (Hill & O'Boyle, 2000).

Forensic investigators can start by sorting through automatically generated IDS alarms and then use clues from the alarms to further analyze less developed system logs and information.  In order to isolate evidence of an intrusion, the investigator needs broad knowledge of various operating systems and hacking techniques and must be able to comprehend the logs of diagnostic tools and systems.  The IDS is the third most useful source of evidence because, after the intrusion is discovered, it can capture details about the rogue connection, help prevent or block further connections, and pinpoint where the traffic is aimed.
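A toy version of signature matching over connection logs follows.  The signatures and log format are invented (the first rule echoes the "open_sesame" backdoor from the earlier scenario), and the deliberately over-general port-scan rule shows how false alarms arise.

```python
import re

# Hypothetical generalized signatures: regexes over connection-log lines.
SIGNATURES = {
    "telnet backdoor": re.compile(r"telnet .*open_sesame"),
    "port scan":       re.compile(r"connect attempts=(\d{2,})"),  # fires on any burst
}

def match(log_lines):
    """Return (signature name, offending line) pairs for every hit."""
    alerts = []
    for line in log_lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                alerts.append((name, line))
    return alerts

log = [
    "10.0.4.2 telnet login with open_sesame",
    "192.168.1.9 connect attempts=40 in 2s",
]
print(match(log))
```

As the surrounding discussion notes, a truly novel attack matches no existing pattern, so a matcher like this stays blind until a signature is written after the fact.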

Internet Service Provider Records

The last source of evidence was the Internet Service Provider (ISP) used by the hacker.  Investigators were able to call the ISP and request that logs and records connected with the case be preserved (Casey, 2005).  According to Daniel (2012), after a subpoena is issued (if one is required), some basic information can be gleaned from ISP records, depending on what the ISP collects on its account holders.  Available information could include names, e-mail addresses, and mailing addresses for paid account holders, as well as payment information such as credit cards or bank account details, which may lead to other evidence.  IP addresses assigned to the account during the dates and times requested, and the associated activity, may be available along with a MAC address for the computer making the connection (Daniel, 2012).  The challenges of collecting information from an ISP are that a subpoena may be required, the information may not always be reliable, and different ISPs keep varying amounts of information about their customers.  For these reasons, ISP records are the least useful data source in this scenario.

Malware Installation

Malware is malicious software that can emerge from scripts or code hidden in websites or content, embedded in web advertisements, or buried in different types of software programs.  Malware can infect a system when a user visits a website, opens an email, or clicks on a hyperlink, among many other normal activities.  Common types of malware are viruses, rootkits, spyware, and worms, and each type infects a system in a different way (Goodrich, 2012).  Malware is dangerous because it exists in a multitude of formats, is easy to create, and is hard to track.  Although there are many different anti-virus, anti-spyware, and anti-malware applications to detect and remove malware from a system, these programs are only as effective as the updates provided to recognize the attack.

One common scenario was presented by Martin Overton at the 2008 Virus Bulletin Conference: a user calls the helpdesk complaining that their computer is suddenly unusually slow to respond and that they cannot bring up Task Manager to figure out what the problem might be.  How does the helpdesk know whether the problems are caused by malware or by something the user has done?  The anti-virus program shows no signs of an infection, is recently updated, and has been active throughout the reported timeframe.  What should the administrator do?  How can the machine be investigated further to determine the presence of malware (Overton, 2008)?  Overton presents an all too familiar scenario, which will be used as the basis for analyzing a malware installation.

Prioritized data sources

Live System Data

As in the network intrusion scenario, the most useful data source for a malware installation is the collection of live system data.  Overton (2008) recommends that after a suspect system is identified, all traffic entering and leaving the system be captured, including searches for hidden files inserted by malware, most likely located in alternate data streams.  Nmap, Nessus, and various other vulnerability assessment tools can be used on the suspect workstation as well as the network to analyze anomalies (Overton, 2008).   Programs such as Helix3 and Windows Forensic Toolchest can examine volatile system data for valuable clues such as network routing tables, system drivers and applications, and analysis of running processes and services, all without alerting the attacker that an investigation is taking place (Aquilina, Malin & Casey, 2010).  The challenge in determining whether malware is installed on a live system is that tools may not be available to conduct a thorough analysis, anti-malware tools may give a high number of false positives, or the malware may be so stealthy that it goes unnoticed until the damage caused is irreparable.

Intrusion Detection System

An IDS is the second most useful tool for investigating a malware installation.  After the initial investigation is complete and the analyst has determined that an infection is probable but was not caught by the anti-malware program, the workstation should be removed from the network to prevent the spread of malware to other systems, and the ports and protocols collected should be analyzed further using an IDS or other network analysis tools such as Wireshark or Snort (Overton, 2008).  The second step in discovering malware is analysis, and an IDS can assist by allowing the creation of signatures based on the information captured from the previous inspection.  These signatures can then be deployed to block future attacks until anti-virus programs are updated.  There are several reasons to use an IDS to detect and prevent malware that comes in through the network boundary.  In an earlier conference paper, Overton (2005) explains that malware is evolving quickly and requires faster detection methods.  An IDS can also be part of a defense-in-depth strategy, used in combination with anti-malware scanning tools to provide improved protection.  Finally, an IDS records the source IP address of malicious traffic, and this data can be used to quickly stop the spread of threats across the network (Overton, 2005).  The challenges with an IDS are that signatures can be difficult to create and maintain, analysts require training to use them for malware detection, and the amount of information left for an investigator to comb through may be overwhelming.
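
To make the signature idea concrete, the sketch below, a minimal illustration assuming the scapy packet-capture library and a purely hypothetical byte pattern, flags TCP payloads containing a pattern captured during the earlier inspection.

    # Minimal signature-matching sketch (assumes scapy; SIGNATURE is hypothetical).
    from scapy.all import IP, Raw, sniff

    SIGNATURE = bytes.fromhex("deadbeef")  # pattern taken from the earlier capture

    def inspect(pkt):
        # Flag any packet whose raw payload contains the suspect byte pattern
        if pkt.haslayer(IP) and pkt.haslayer(Raw) and SIGNATURE in bytes(pkt[Raw].load):
            print("signature hit:", pkt[IP].src, "->", pkt[IP].dst)

    # Requires capture privileges; runs until interrupted
    sniff(filter="tcp", prn=inspect, store=False)

A production IDS signature would be more specific, but the principle of matching known bytes in traffic is the same.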

Virtual Machine

The third most useful data source for malware installation and analysis is a virtual machine.  As suggested by Overton (2008), a private or closed network or lab environment should be used to analyze malware if available.  Virtual machines make this possible by allowing multiple systems to run on the same hardware, letting the observer watch how a piece of malware behaves inside various systems, a practice called “behavioral malware analysis” (Zeltser, 2007).  Virtual machines can take on the forms of many different systems or platforms without requiring an entire lab of expensive equipment.  Virtual machine vendors such as VMware allow the administrator to take multiple snapshots of a system's settings, performance, and volatile data throughout the observation process, so that if further study is needed it is possible to return to a previous snapshot.  VMware can also create a simulated network, so it is not necessary to connect the infected machine to a live network, allowing analysis in a protected environment while retaining the ability to analyze network traffic (Zeltser, 2007).  In a virtual environment, threats can be detected and mitigations tested and proven.
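
A minimal sketch of that snapshot-and-revert workflow, assuming VMware's vmrun command-line utility is installed (the .vmx path and snapshot name are hypothetical):

    # Snapshot-and-revert sketch around a detonation run (vmrun assumed installed).
    import subprocess

    VMX = "/vms/analysis-win10/analysis.vmx"  # hypothetical analysis guest

    def snapshot(name):
        subprocess.run(["vmrun", "snapshot", VMX, name], check=True)

    def revert(name):
        subprocess.run(["vmrun", "revertToSnapshot", VMX, name], check=True)

    snapshot("clean-baseline")
    # ... run the sample inside the guest and observe its behavior ...
    revert("clean-baseline")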

The use of virtual machines (VMs) also presents challenges: a virtual environment cannot always replicate the characteristics of an operating system on a physical platform, so attackers may be able to detect the VM.  In certain cases a virtual environment may not meet the need, because of the type of system being imitated or the behavior of the malware, requiring the analyst to use a complicated and expensive laboratory environment (Brand, Valli & Woodward, 2010).

Insider File Deletion

Insider threats to an organization can come from employees, contractors, vendors, visitors, and anyone else with reasonable access to company assets.  What makes insiders threatening is their familiarity with systems, databases, and processes, as well as their permitted position inside security barriers (Cappelli, Keeney, Kowalski, Moore & Randazzo, 2005).  Whether deleted accidentally or maliciously, files that are crucial to an organization can be lost, and information security personnel need the skills and tools to recover that data.

In a scenario from March 2002 offered by Cappelli, Keeney, Kowalski, Moore & Randazzo (2005), a resentful employee of a finance company planted a logic bomb that erased 10 billion files before he quit over an annual bonus disagreement.  A logic bomb can be inserted into a computer system and set to activate at a later time or upon a specified action.  The deleted files in this case affected servers across the country and cost over $3 million in damages and file reconstruction.  If the company were to attempt to recover the deleted files, what data sources would be useful to the investigation?  This question will be the basis for the following analysis.

Prioritized data sources

Hard Drive (Non-volatile system data)

In the previous subjects of network intrusion and malware installation, live system data was the highest-priority data source because of the indications given by volatile data.  In the case of insider file deletion, however, the first goal is to make a forensic copy of the hard drive in an attempt to recover data that has not yet been overwritten.  Even the least savvy computer user knows to empty the Recycle Bin, so volatile data is less of a concern than the non-volatile data referenced by the master file table, which can often be recovered with the assistance of various third-party applications.

For example, when a file is removed from the Recycle Bin in Windows, only the file metadata, such as the path, sector, and identifying information like creation and modification dates, is erased.  The file system simply marks the space occupied by the deleted file as available, and newly saved files will gradually overwrite information deleted long ago.  However, if a newly saved file does not take up all of the space of a previously deleted file, the old information is not completely overwritten and the file is still recoverable with forensic software.  If only a short time has passed since the deletion, tools such as WinUndelete for Windows can easily recover the file (Landry & Nabity, n.d.).  The challenge of recovery from a computer hard drive is that after a period of time, the desired files may be entirely overwritten.  A smart criminal may also use freeware such as “Eraser” to overwrite erased data immediately, making it unrecoverable by forensic toolkits (Capshaw, 2011).
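
The fact that data survives until it is overwritten is also what makes file carving possible.  The sketch below is a minimal illustration (the image filename is hypothetical): it scans a raw disk image for JPEG start- and end-of-image markers and recovers whatever lies between them, even when no file table entry remains.

    # Minimal file-carving sketch: recover JPEGs from a raw image by signature.
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"  # JPEG start/end markers

    def carve_jpegs(image):
        pos = 0
        while True:
            start = image.find(SOI, pos)
            if start == -1:
                break
            end = image.find(EOI, start)
            if end == -1:
                break
            yield image[start:end + 2]
            pos = end + 2

    with open("disk.img", "rb") as f:  # hypothetical forensic image
        data = f.read()
    for i, blob in enumerate(carve_jpegs(data)):
        with open("carved_%d.jpg" % i, "wb") as out:
            out.write(blob)

Real carving tools handle fragmentation and false positives far more carefully; this only shows why deletion alone does not destroy data.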

Network Storage

Of equal worth to a forensic investigation in the insider file deletion scenario is the recovery of deleted files from a network storage device.  In most cases, files of considerable importance to an organization must be shared with a group of people and are therefore located on a network storage device such as Network Attached Storage (NAS), a Windows file server, or a Storage Area Network (SAN).  After a file is deleted from a folder on the network, the easiest way to recover it is through previous versions.  Microsoft TechNet (2005) explains that on most Windows Server versions there is a Previous Versions tab that, when selected, lists files that have been deleted from that location; these can then be copied and pasted to the desired location.  Similarly, NAS and SAN file systems offer recovery of recent snapshots from their administrative user interfaces.  The challenge with recovering files from a network storage device is that a copy of a RAID or other file system disk may be large and difficult to analyze.  Insiders with administrative access may also know how to permanently delete files or destroy network storage volumes.
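
On Windows, the Previous Versions feature is backed by the Volume Shadow Copy Service.  As a minimal sketch (Windows only, administrator rights required), the built-in vssadmin utility can enumerate the shadow copies available for recovery:

    # List Volume Shadow Copy snapshots via the built-in vssadmin utility.
    import subprocess

    result = subprocess.run(["vssadmin", "list", "shadows"],
                            capture_output=True, text=True, check=True)
    print(result.stdout)  # shadow copy IDs, source volumes, creation times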

Conclusion

Digital forensics investigations deal with a multitude of data sources.  This paper has covered three events that drive the prioritization of the types of data analyzed, the information desired, and the usefulness of that data in regard to each event.  Important data sources for network intrusions are account audits, live system data, Intrusion Detection Systems, and Internet Service Provider records.  Malware installation requires examination of live system data, Intrusion Detection Systems, and virtual machines.  Recovery of deleted files relies mostly on hard drives and non-volatile data.  Each data source has tools, advantages, and challenges for an investigator to consider depending on the situation at hand.

 

 

References:

Aquilina, J. M., Malin, C. H., & Casey, E. (2010). Malware forensic field guide for Windows systems, digital forensics field guides. New York: Syngress. Retrieved from http://www.malwarefieldguide.com/Chapter1.html

Brand, M., Valli, C., & Woodward, A. (2010, November). Malware forensics: Discovery of the intent of deception. Originally published in the proceedings 8th Australian digital forensics conference, Perth, Australia. Retrieved from http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=1074&context=adf

Cappelli, D., Keeney, M., Kowalski, E., Moore, A., & Randazzo, M. (2005). Insider threat study: Illicit cyber activity in the banking and finance sector. (Technical Report, Carnegie Mellon Software Engineering Institute). Retrieved from http://www.dtic.mil/dtic/tr/fulltext/u2/a441249.pdf

Capshaw, J. (2011, April 01). Computer forensics: Why your erased data is at risk. Retrieved from http://www.webmasterview.com/2011/04/computer-forensics-data-risk/

Casey, E. (2005). Case study: Network intrusion investigation — lessons in forensic preparation. Digital Investigation, 2(4), 254-260. Retrieved from https://wiki.engr.illinois.edu/download/attachments/203948055/1-s2-1.0-S1742287605000940-main.pdf?version=1&modificationDate=1351890428000

Daniel, L. (2012). Digital Forensics for Legal Professionals. Waltham, MA: Elsevier Inc. Retrieved from http://my.safaribooksonline.com/book/-/9781597496438/22-discovery-of-internet-service-provider-records/223_what_to_expect_from_an_int

Goodrich, R. (2012, Nov 21). What is Malware? How malicious software can affect your computer. Retrieved from http://www.technewsdaily.com/15612-what-is-malware.html

Hill, B., & O’Boyle, T. (2000, August). Cyber detectives employ Intrusion Detection Systems and forensics. Retrieved from http://www.mitre.org/news/the_edge/february_01/oboyle.html

Landry, B., & Nabity, P. (n.d.). Recovering deleted and wiped files: A digital forensic comparison of FAT32 and NTFS file systems using evidence eliminator. Retrieved from http://www.academia.edu/1342298/Recovering_Deleted_and_Wiped_Files_A_Digital_Forensic_Comparison_of_FAT32_and_NTFS_File_Systems_using_Evidence_Eliminator

McDougal, M. (2006). Live forensics on a windows system: Using windows forensic toolchest. Retrieved from http://www.foolmoon.net/downloads/Live_Forensics_Using_WFT.pdf

Microsoft. (2005, January 21). Recover a file that was accidentally deleted. Retrieved from http://technet.microsoft.com/en-us/library/cc787329(v=ws.10).aspx

National Institute of Standards and Technology. (2002). Agency IT Security Handbook: Technical controls. In Federal Agency Security Practices (2 Ed.). Retrieved from http://csrc.nist.gov/groups/SMA/fasp/documents/policy_procedure/technical-controls-policy.doc

Overton, M. (2005, May). Anti-malware tools: Intrusion Detection Systems. Paper presented at the 2005 EICAR Conference, Malta. Retrieved from http://momusings.com/papers/EICAR2005-IDS-Malware-v.1.0.2.pdf

Overton, M. (2008, October). Malware forensics: Detecting the unknown. Paper presented at the 2008 Virus Bulletin Conference, Ottawa, Canada. Retrieved from http://momusings.com/papers/VB2008-Malware-Forensics-1.01.pdf

Zeltser, L. (2007, May 1). Using VMware for malware analysis. Retrieved from http://zeltser.com/vmware-malware-analysis/

 


Trusted Platform Module

 Trusted Platform Module

Team Project by: Philip Roman, Vouthanack Sovann, Kenneth Triplin, David Um, Michael Violante, Amy Wees

CSEC640, 9046

November 25, 2012

 

Introduction

The focus of this paper is to discuss current issues and recent developments in Trusted Platform Module (TPM) security, as well as its strengths and weaknesses.  The main reasoning behind TPM security devices was to establish a means of trusted computing.  These devices utilize unique hardcoded keys to perform software authentication, encryption, and decryption, among other functions.  This paper will discuss what TPM is comprised of, its strengths and weaknesses, possible vulnerabilities, TPM attestation, and potential uses for TPM.  The intended audience for this paper is readers who are technically savvy, with an in-depth knowledge of security concepts and a general understanding of TPM, encryption, and other cybersecurity concepts.

Background

As trusted computing becomes more prevalent and necessary, TPM enhances the security of information systems by acting as a trusted entity that can be utilized for secure storage and cryptographic key generation, among other capabilities (Aaraj, Raghunathan, & Jha, 2008).  TPM was created by the Trusted Computing Group (TCG), an initiative among some of the most prominent information technology corporations in the world, to establish a better means of trusted computing.  A TPM chip helps to ensure the security of an information system because it implements security from a hardware perspective.  Although TPM technologies and the software that utilizes them inherently contain flaws and vulnerabilities, the primary hesitation numerous groups have had with implementing TPM is that it is too inefficient.  Some organizations, especially those that demand extensive end-user monitoring and activity logging, are hesitant to implement TPM because it limits their ability to scrutinize activity or capture activity logs.

TPM is composed of three basic elements: the root of trust for measurement, the root of trust for storage, and the root of trust for reporting.  These three elements refer to platform integrity measurements, the storage of those measurements, and the reporting of the stored values (Aaraj, Raghunathan, & Jha, 2008).  A secondary use of TPM is the generation of cryptographic keys.  Figure 1 details the make-up of TPM:

Figure 1: Make-Up of TPM (Aaraj, Raghunathan, & Jha, 2008)

 

Benefits and Strengths of TPM

TPM offers three primary benefits: secure content storage, secure reporting of platform-specific criteria, and hardware authentication (Ryan, 2009).  By using a TPM to secure content, the user gains the benefit of storing files securely without relying on a software-based operating system.  In the case of mobile devices, users can encrypt entire hard drives using TPM, reducing the risk of losing sensitive information.  Van Dijk, Sarmenta, Rhodes, and Devadas (2007) explain how many users can connect to an untrusted storage device over an untrusted network and protect the information being shared, without reliance on a secured common operating system, using TPM 1.2 technology.  For example, many users today rely on secure storage servers hosted online so that they can share data between multiple devices and access information from anywhere.  This requires the user to trust the administrators, the server and client operating systems and additional security software, and the server BIOS and CPU.  Van Dijk et al. (2007) argue that by placing a TPM chip in the user's machine and in the online server, even peer-to-peer transactions can be completely secured using one-time certificates.  A one-time certificate uses the TPM to verify the identity of the sender and receiver of the information, which can be confirmed at any time after the transaction has occurred.  The verifier requires no contact with the issuer and need only rely on the TPM chip in the originating machine.  Significantly, one-time certificates cannot be counterfeited or falsified even from a hacked machine, which could open up their use in multiple offline applications (Van Dijk et al., 2007).

A TPM can also collect, secure, and report information about the state of a computer's components such as the BIOS, boot records and sectors, applications, and the OS.  The TPM does this using platform configuration registers (PCRs) to securely pass information measured by one component about the state of another component.  Upon boot-up, component X measures the status of component Y and inserts the data into a PCR, where it is secured and able to attest to the status of the platform from that point forward (Ryan, 2009).  To pass this information, known as the platform configuration, to another entity, the TPM encrypts the configuration using a secured signature key which can only be decrypted by a remote TPM key with the required authentication information (Sadeghi & Stüble, 2004).
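
The mechanism underlying this measurement chain in TPM 1.2 is the PCR “extend” operation.  The following minimal sketch (a simplified illustration, not the TCG API) shows how each measurement is folded into a register as SHA1(old value || measurement), so the final value depends on every component measured and on the order of measurement.

    # Simplified simulation of TPM 1.2 PCR extension (not the real TPM interface).
    import hashlib

    def extend_pcr(pcr, measurement):
        # PCR_new = SHA1(PCR_old || measurement)
        return hashlib.sha1(pcr + measurement).digest()

    pcr = b"\x00" * 20  # PCRs start zeroed at platform reset
    for component in (b"BIOS image", b"boot loader", b"OS kernel"):
        pcr = extend_pcr(pcr, hashlib.sha1(component).digest())
    print(pcr.hex())  # summarizes the whole boot sequence in 20 bytes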

Another benefit of TPM is hardware or platform authentication that preserves the privacy of the user through Direct Anonymous Attestation (DAA).  Brickell, Camenisch, and Chen (2004) describe DAA as a secure group signature that cannot be tampered with after it is invoked.  Additionally, each user can choose whether or not their signatures can be linked to one another, allowing anonymity.  DAA requires four participants: the host platform and its TPM, the issuer, and the verifier (Ryan, 2009).  The TPM generates a secret message, receives a signature for it from the issuer, and then uses that signature to prove to the verifier, anonymously, that the attestation was received (Brickell, Camenisch, & Chen, 2004).  DAA also allows for detection of rogue or published keys, because a verifier can confirm that a signature has already been used.  Brickell et al. (2004) proved that DAA is secure in the random oracle model under the strong RSA and Diffie-Hellman assumptions.

Weaknesses of TPM

A known weakness of TPM is the cold boot attack, which overcomes the disk encryption thought to protect the contents of a hard drive from physical access.  Halderman et al. (2009) report they were able to overcome the disk encryption of BitLocker, TrueCrypt, and FileVault with cold boot attacks.  The idea behind an encrypted hard drive is that if a laptop is stolen in a locked state and the thief powers down the computer, everything in memory is erased and the encryption keys are lost.  Unfortunately, this is not always the case.  BitLocker is provided by Microsoft for use with TPM and encrypts parts of the disk on demand.  Halderman et al. (2009) created an automated tool called “BitUnlocker,” using an external USB hard disk with a specialized driver, that allows BitLocker volumes to be remounted on a Linux OS.  The tool runs a key finder that tries each candidate key until one works; after breaking into the system, it allows the disk volume to be searched using the other OS.  By rebooting a Windows laptop and connecting the external drive, the tool successfully recovered the encryption keys from residual memory and allowed the attackers to decrypt the disk in moments (Halderman et al., 2009).  An obvious mitigation technique is to prevent physical access to the system using a defense-in-depth strategy, or to employ additional encryption methods on mobile technology.
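
The key-finding step can be illustrated with a minimal sketch (a simplified stand-in for the authors' tools; the dump filename is hypothetical) that scans a raw memory image for unusually high-entropy windows, a common heuristic for spotting candidate encryption keys among otherwise structured data.

    # Entropy scan over a memory dump: high-entropy windows are key candidates.
    import math
    from collections import Counter

    def entropy(data):
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def candidate_keys(image, window=32, threshold=4.8):
        for offset in range(0, len(image) - window, 16):
            chunk = image[offset:offset + window]
            if entropy(chunk) >= threshold:
                yield offset, chunk

    with open("memory.dmp", "rb") as f:  # hypothetical memory image
        image = f.read()
    for offset, chunk in candidate_keys(image):
        print(hex(offset), chunk.hex())

Tools like the authors' key finder go further and verify candidates against the structure of real key schedules.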

Social engineering may also be a vulnerability of TPM, as most computer manufacturers, such as Dell, keep a master password list for each service tag number.  Information assurance professional Morrison (2010) found he could simply call Dell, provide the service tag number, and receive the key to unlock his BIOS.  Without being asked for personally identifying information or passwords, anyone could call the company and receive the same master password.  On that same phone call, Morrison also learned that makers of external hard drives record the chips in the devices they sell, so a master password can be generated the same way.  Morrison (2010) recommends protecting sensitive mobile data with an aftermarket external hard drive whose maker may be less likely to provide such obliging customer service.

 

TPM Discussion

 

Ideal Application of TPM

 

When one looks at applications that function well with TPM, a good example is the random oracle model noted by Gunupudi and Tate (2007).  As Gunupudi and Tate (2007, p. 1) put it, the random oracle model “is an idealized theoretical model that has been successfully used for designing many cryptographic algorithms and protocols”.  The idea is to prove a cryptographic scheme secure in a setting where all parties, including the adversary, have access to a random function, called a random oracle, and then to replace the random oracle with a “good” cryptographic hash function in the standard model.  Gunupudi and Tate (2007) show that a TPM can be used to instantiate the random oracle, yielding a standard-model scheme whose security argument carries over.

Another example of an ideal application that supports TPM functionality is cloud computing.  This was championed by Liu et al. (2010), who proposed and then formally implemented virtual TPMs in a cloud-based architecture.  The premise behind this development was to provide TPM functionality to applications that did not have the use of a TPM chip as part of their platform.

According to Liu et al. (2010), TPM functionality in cloud computing is easily accessible to applications written in diverse languages because cloud computing distributes services over basic protocols.  TPM and its hardware chip commonly perform well in most applications.  Trusted platforms add value to applications and services such as electronic money systems, email, workstation sharing, platform management software, single sign-on, virtual private networks, Web access, and digital content delivery (Pearson, 2005).

 

Law Enforcement Application

When it comes to law enforcement and the Trusted Platform Module (TPM), officials see benefits and drawbacks in certain features of TPM.  From a digital forensics point of view, Burmester and Mulholland (2006) note that the advent of trusted computing has strong points.  In fact, the trusted computing (TC) enabled features criticized by naysayers may become a boon for cyber-investigators.  On the other hand, if file encryption becomes the norm, trusted computing may turn out to be law enforcement's worst nightmare.

 

TPM Keys

In determining the effects and types of TPM keys and their processes, Liu et al. (2010) note that the trust in TPM lies mainly in its capabilities for secure key management (i.e., key generation, storage, and use) and for secure storage and reporting of platform configuration measurements.  Each TPM has a unique endorsement key (EK), which is generated by the chip manufacturer.  Before using a TPM chip, users need to take ownership of the chip and create a storage root key (SRK).  Both the EK and the SRK are RSA key pairs, and both are protected by keeping their private keys inside the TPM chip at all times.

The TPM contains an endorsement private key (EK) that uniquely identifies the TPM (and thus the physical host), along with some cryptographic functions that cannot be modified.  The method works by having the participating companies endorse the matching public key, certifying the acceptability of the chip and the validity of the key in question (Santos, Gummadi, & Rodrigues, 2009).

 

TPM Authorization Protocols

 

The Object Independent Authorization Protocol (OIAP), according to Ryan (2009), creates a session that can last an arbitrary period of time and can affect any object, yet works only with specific commands.  An OIAP authorization session begins when the command TPM_OIAP is executed.  As Ryan (2009) describes it, the user process sends the command to the TPM together with a nonce argument.  Nonces generated by the user process are labeled odd nonces, and nonces generated by the TPM are labeled even nonces.  The TPM_OIAP reply contains the authorization handle along with a newly generated even nonce.  Thereafter, each command within the session sends the authorization handle as part of its process, introducing a completely fresh odd nonce, and the response generated by the TPM includes a fresh even nonce.  All authorization Hash Message Authentication Codes (HMACs) contain the latest odd and even nonces.  Throughout an OIAP session, the authorization HMACs are keyed on the authdata for the resource (e.g., a key) requiring authorization.

 

Another feature in this process is the Object Specific Authorization Protocol (OSAP), in which a session is created that is bound to a single object specified when the session is set up.  Ryan (2009) describes how OSAP works under TPM: an OSAP session is created when the TPM receives the TPM_OSAP command with the name of the object (e.g., a key) and an odd OSAP nonce.  The reply incorporates the authorization handle, an even nonce for the “rolling” nonces, and an even OSAP nonce.  As Ryan (2009) describes, the user process and the TPM each then calculate a shared secret, known as the “OSAP secret,” which is an HMAC, keyed on the object's authdata, over the even and odd OSAP nonces.  At this point, commands in the authorization session may be executed.  In an OSAP session, the authorization HMAC is keyed on the OSAP secret.  The rationale of this procedure is to let the user process store the session secret for possibly prolonged durations without endangering the authdata on which the security of the session is based (Ryan, 2009).

An OSAP session can also make use of numerous commands, but those commands must address the single object identified at the time the session was established.  The benefit of an OSAP session, as mentioned by Ryan (2009), is that it can be utilized for commands that introduce additional authdata to the TPM.
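
The secret derivation Ryan describes can be sketched in a few lines.  The example below is a simplified illustration of the idea, not the full TCG specification, and all values are made up for demonstration.

    # Simplified OSAP secret derivation and command authorization (illustrative).
    import hashlib, hmac, os

    authdata = hashlib.sha1(b"owner password").digest()  # 20-byte object authdata

    nonce_even_osap = os.urandom(20)  # chosen by the TPM at session setup
    nonce_odd_osap = os.urandom(20)   # chosen by the user process

    # Both sides compute the same OSAP secret: an HMAC keyed on the authdata
    osap_secret = hmac.new(authdata, nonce_even_osap + nonce_odd_osap,
                           hashlib.sha1).digest()

    # Authorizing one command with the rolling nonces, keyed on the OSAP secret
    nonce_even, nonce_odd = os.urandom(20), os.urandom(20)
    command_digest = hashlib.sha1(b"ordinal and parameters").digest()  # placeholder
    auth = hmac.new(osap_secret, command_digest + nonce_even + nonce_odd,
                    hashlib.sha1).digest()
    print(auth.hex())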

Attestation Principle

TPM attestation is a function that allows system verification to take place through a remote party.  Validation is a key role in the design of TPM and supports the overall structure of a secure system.  The process exists to confirm that no modifications have taken place that would deviate from a secure, standardized state.  Attestation verifies that no deviations have occurred, based on a standardized set of parameters called the Platform Configuration Registers (PCRs) (Lioy, 2011).  The deviations attestation may detect include unauthorized software or hardware, which can have hostile intent toward the overall secure computing process.  The process begins with a platform that contains a TPM and a verifier, also known as an appraiser.  The TPM within the specified platform requires authentication to confirm that the components within the system have not been modified, thus establishing a trustworthy source.  The TPM possesses an endorsement key (EK) that is delivered to the verifier, which allows the verifier to authenticate the PCRs (Segall, 2011).  If the verifier agrees that the platform's configuration is secure, the verifier grants an authentication key to the platform, indicating a validated attestation.
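
Stripped of the cryptographic wrapping, the appraiser's decision reduces to comparing the PCR values reported in a quote against known-good reference values, as in this minimal sketch (the hex digests are hypothetical, and signature verification of the quote is omitted).

    # Core of an attestation appraisal: match quoted PCRs to reference values.
    KNOWN_GOOD = {
        0: "6c7f0ba0966ba8b9b82ffb14bdfcbac3bded0885",  # hypothetical BIOS digest
        4: "3f7f14c2c932b88ef852ec98e78cfd0632a1a892",  # hypothetical loader digest
    }

    def appraise(quoted_pcrs):
        # Trust the platform only if every monitored PCR matches its reference
        return all(quoted_pcrs.get(i) == v for i, v in KNOWN_GOOD.items())

    quote = {0: "6c7f0ba0966ba8b9b82ffb14bdfcbac3bded0885",
             4: "3f7f14c2c932b88ef852ec98e78cfd0632a1a892"}
    print("platform trusted" if appraise(quote) else "attestation failed")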

Attestation Features

The method of attestation may be based on the needs of the organization that requires trusted computing.  Although attestation models differ, a baseline of relevant features should exist in each model.  According to Coker et al. (2008), attestation architectures should follow specific principles to be ideal for use.  These principles include:

•           Current Information

•           Comprehensive Data

•           Limited Disclosure

•           Clear Logical Language

•           Trust Mechanisms

An attestation model that constantly analyzes real-time information generated by the target system is better able to detect anomalies that could affect the decision of the verifier.  The analysis can be conducted using system measurement tools that provide a comprehensive array of information.  A system measurement tool examines specific portions of the system; for example, measuring an operating system's kernel can help verify that the target was not subject to a network-based attack (Coker, et al., 2008).  The full state or attributes of a target may be necessary to validate system contents and enable the attestation system to function, and the internal system data within a target is vital to the attestation decision making.  However, the full contents of the system state should be controlled by the target itself.  Having the target control the flow of system information helps minimize unnecessary information disclosure, which could otherwise expose users within a secure environment.  The target system delivering the encryption key must present a platform language that the attestation system can comprehend.  Moreover, the attestation system must be acknowledged by both the appraiser and the target system, allowing all parties to identify the amount of variation between the systems.


 

Attestation Architecture

The design of an attestation system differs greatly based on the needs of the organization using it to securely process transactions.  Five general requirements, drawn from the principles of attestation, provide a meaningful outline (Coker, et al., 2008).  The requirements of an attestation system include:

•           Measurement Mechanisms

•           Self Protection

•           Delegation through Proxies

•           Decision Management

•           Target Separation

Architects should devise a comprehensive plan to configure the measurement tool within an appraiser's system.  Each measurement tool is suited to specific target parameters and cannot assess details from other targets, as each target produces a unique set of output information.  Each measurement tool must understand the boundaries and limitations of its target; with the proper configuration, the tool will generate meaningful decisions during an appraisal.  In addition to building a sound measurement tool, the system also needs a mechanism to secure the measurement process against unwanted deviations.  A preliminary baseline analysis of the measuring tool should be performed before conducting attestation procedures.  By establishing a baseline, the attestation system can confirm its own integrity before monitoring target information.

Trust and credibility between the target and the attestation system present an obstacle for designers.  The information passed from the target to the attestation system includes sensitive parameters that require the utmost protection.  An intermediary called an attestation proxy can be established to ensure that the target system delivers the appropriate information while the attestation system receives only the data necessary to perform its decision making (Coker, et al., 2008).  Trust by the target and the attestation system therefore falls on the attestation proxy rather than on a direct-trust relationship, which would present conflict if the target information were fallible.

Several targets may deliver vast amounts of information to a single attestation system.  Since an attestation system may be required to produce values that correlate to each specific target, a systematic application such as an attestation manager can serve as a databank for performing complex decision making in various scenarios.  The manager can disable specific measurement tools to increase efficiency based on each target's system state.  While an attestation manager is helpful in the decision-making process, a target that contains corrupt information can be a sign of manipulation, which could cause unwarranted modifications to the attestation system.  To prevent this setback, designers can implement virtualized systems that protect the source attestation system from potential modifications.  A separate virtual machine creates an additional boundary between the targets and the attestation system, ensuring that a target's configuration, whether valid or corrupt, has no control over the appraiser's measurement tools.

TPM Vulnerabilities

Existing vulnerabilities can void the credibility of certain TPM processes through hardware-related modifications.  Although viable solutions exist for most vulnerabilities, each attack poses a great risk to the process, creating the possibility of unforeseen consequences.  The most recent version of TPM, version 1.2, addresses critical vulnerabilities found in version 1.1.  In version 1.1, a simple hardware attack is possible using a 3-inch insulated wire to reset the TPM bus, bypassing the protective measures of TPM's auditing mechanism (Lawson, 2007).  The issue is resolved in TPM version 1.2; however, other vulnerabilities continue to exist.

TPM is vulnerable to replay attacks, which can cause redundant processes to occur unnecessarily.  Specifically, the trusted computing protocols OIAP and OSAP are subject to numerous attacks, including replay attacks (Bruschi, Cavallaro, Lanzi, & Monga, 2005).  The replay attack vulnerability is mitigated through the rolling nonce protocol described earlier, in which each authorization HMAC incorporates two fresh nonces (Chen & Ryan, 2010).  TPM secret keys may also be involuntarily extracted, which threatens TPM authenticity because verifiers cannot “distinguish between real TPMs and fake ones,” known as rogue TPMs (Brickell, Camenisch, & Chen, 2004).  Viable solutions against rogue TPMs involve the use of an intermediary to guarantee the legitimacy of the TPM's endorsement key.

Future of TPM

TPM modules are currently inside 600 million PCs, and the technology is becoming increasingly popular.  TPM technology is expected to reach an additional 500 million machines, including major organizations throughout the world, by 2013 (Berger, 2010).  The potential increase in TPM demand creates a widespread issue for manufacturers, as hardware suitability becomes a major setback.  The call to redesign future hardware is a viable solution that would accommodate new TPM version capabilities (Schoen, 2003).

As the newest version of Microsoft Windows, Windows 8, is presented to the public, a security feature based on the Unified Extensible Firmware Interface (UEFI) BIOS standard provides a trusted boot mechanism backed by the computer's TPM (Ashford, 2012).  This component can measure the BIOS during secure boot, and the measurement can be reported through remote attestation to a party that can certify that the BIOS is free of deviations.  The Trusted Computing Group (n.d.) has released the TPM 2.0 library draft specification to the public.  The additions include:

•           Algorithm Enhancement

•           Improvements to TPM availability

•           Enhanced TPM management

•           Additional cryptographic services for BIOS security

 

Conclusion

The success of cybercrime is a testament to the myriad ways criminals can infiltrate organizations and cause havoc.  Whether it is through vulnerable, malicious, or misconfigured programs, social engineering, physical theft, or electronic eavesdropping, an intruder needs to find only one weakness to exploit (Challener, Yoder, Catherman, Safford, & Van Doorn, 2008).  This leaves organizations with the daunting task of securing their machines and training their employees against an ever-changing array of attacks.  TPM and trusted computing aim to address all of these attacks in one comprehensive way through key management, authorization protocols, and attestation.  When TPM is used to its maximum potential, consumers can trust that their machines will boot up in a valid configuration, store data securely, identify which user is on which specific machine, and participate in secure protocols with uncompromised keys (Challener et al., 2008).  All of these features work toward frustrating the efforts of intruders and improving the security posture of an organization.  Because these protections require no in-depth interaction from the end user, training costs, insider threat risk, and the danger of a user introducing a threat into the organization are also reduced.  Managing the security of end users' machines from a central root of trust gives administrators a smaller attack surface to be vigilant of.  The potential impact of TPM truly extends from the top of the organization to the bottom.  With the availability of software that allows organizations to take advantage of TPM, its adoption should grow.

One important piece of software that will assist with the acceptance of TPM is Windows 8.  In Microsoft's latest version of Windows, TPM has been integrated at many points to make the setup and management of TPM as easy as possible.  An organization running Windows 8 along with Windows Server 2012 can take advantage of features such as automated provisioning and TPM management, measured boot with support for attestation, TPM-based virtual smart cards, BitLocker network unlock, and TPM-based certificate storage (Microsoft, 2012).  These features allow Windows users to take advantage of and integrate with the full suite of TPM protections, from key management to remote attestation to tamper detection.  As more organizations migrate to Windows 8, their ability to adopt TPM as a security policy should grow, especially since hardware to support TPM is easy to procure.  All of this means that the barrier to entry for organizations implementing TPM has never been lower.  Because TPM addresses security concerns from a hardware-based perspective, it provides a unique ability to bolster an organization's security posture.  As such, the acceptance of TPM should only increase as the software infrastructure continues to mature to support TPM hardware.

TPM is a wide-ranging technology that encompasses many areas of security.  Because TPM spans areas such as key management, remote attestation, and authorization, it holds the potential to have a large impact on the way organizations secure their networks.  This ability is attractive to a wide swath of industries: banks looking to ensure customers have the latest secure software before connecting to their networks, media companies wanting to enforce DRM, and companies seeking to protect sensitive information on laptops.  Although some parties have reservations about TPM, their concerns predominantly speak to the effectiveness of TPM and the level of security it achieves.  If an organization does not have concerns regarding computer forensics or end users' rights, then TPM is an attractive avenue to pursue.  While TPM's hardware and protocols have a few vulnerabilities, these have not been shown to be easily exploitable or to have disastrous potential, and they should not serve as the impetus to forgo using TPM.  With all of the benefits mentioned and the continued maturation and expansion of capabilities, TPM's applicability should grow as well.

The topics explored throughout this paper provide a foundation for understanding what TPM is, what it provides, its strengths and weaknesses, and its future growth.  These subjects should serve as a comprehensive basis of the key current issues and developments in TPM.

References

Aaraj, N., Raghunathan, A., & Jha, N. K. (2008). Analysis and Design of a Hardware/ Software Trusted Platform Module for Embedded Systems. ACM Transactions On Embedded Computing Systems, 8(1). doi:10.1145/1457246.1457254

Ashford, W. (2012). Will this be the year TPM finally comes of age? Retrieved from http://www.computerweekly.com/news/2240157874/Analysis-2012-Will-this-be-the-year-TPM-finally-comes-of-age

Berger, B. (2010). Securing Data & Systems with Trusted Computing Now and in the Future. Retrieved from http://www.trustedcomputinggroup.org/files/static_page_files/C71DF61F-1A4B-B294-D01538F6E3B1C39D/DSCI_InfosecSummit_2010%2010%2002_v2.pdf

Brickell, E., Camenisch, J., & Chen, L. (2004). Direct anonymous attestation. In Proceedings of the 11th ACM Conference on Computer and Communications Security, 132-145.

Bruschi, D., Cavallaro, L., Lanzi, A., & Monga, M. (2005, December). Replay attack in TCG specification and solution. In Computer Security Applications Conference, 21st Annual. IEEE.

Burmester, M., & Mulholland, J. (2006, April). The advent of trusted computing: implications for digital forensics. In Proceedings of the 2006 ACM symposium on Applied computing (pp. 283-287). ACM.

Cabiddu, G., Cesena, E., Sassu, R., Vernizzi, D., Ramunno, G., & Lioy, A. (2011). The Trusted Platform Agent. IEEE Software, 28(2), 35-41. doi:10.1109/MS.2010.160

Chen, L., & Ryan, M. (2010). Attack, solution and verification for shared authorisation data in TCG TPM. Formal Aspects in Security and Trust, 201-216.

Coker, G., Guttman, J., Loscocco, P., Sheehy, J., & Sniffen, B. (2008). Attestation: Evidence and trust. Information and Communications Security, 1-18.

Halderman, J. A., Schoen, S. D., Heninger, N., Clarkson, W., Paul, W., Calandrino, J. A., & Felten, E. W. (2009). Lest we remember: Cold-boot attacks on encryption keys. Communications of the ACM, 52(5), 91-98.

Lawson, N. (2007). TPM Hardware Attacks. Retrieved from http://rdist.root.org/2007/07/16/tpm-hardware-attacks/

Lioy, A. (2011, October 16). Remote attestation. Retrieved from http://security.polito.it/trusted-computing/remote-attestation/

Gunupudi, V., & Tate, S. R. (2007, May). Random oracle instantiation in distributed protocols using trusted platform modules. In Advanced Information Networking and Applications Workshops, 2007, AINAW’07. 21st International Conference on (Vol. 1, pp. 463-469). IEEE.

Liu, D., Lee, J., Jang, J., Nepal, S., & Zic, J. (2010, December). A cloud architecture of virtual trusted platform modules. In Embedded and Ubiquitous Computing (EUC), 2010 IEEE/IFIP 8th International Conference on (pp. 804-811). IEEE.

Morrison, A. (2010, July 21). The social hacking of the un-trusted platform module (TPM). Retrieved from http://blog.morrisontechnologies.com/2010/07/21/the-social-hacking-of-the-un-trusted-platform-module-tpm/

Pearson, S. (2005). Trusted computing: Strengths, weaknesses and further opportunities for enhancing privacy. Trust Management, 91-117.

Ryan, M. (2009). Introduction to the TPM 1.2. Draft of March 24. Retrieved from https://www.cs.bham.ac.uk/~mdr/teaching/modules08/security/intro-TPM.pdf

Sadeghi, A. R., & Stüble, C. (2004). Property-based attestation for computing platforms: Caring about properties, not mechanisms. In Proceedings of the 2004 Workshop on New Security Paradigms, 67-77.

Santos, N., Gummadi, K. P., & Rodrigues, R. (2009, June). Towards trusted cloud computing. In Proceedings of the 2009 conference on Hot topics in cloud computing (pp. 3-3). USENIX Association.

Schmitz, J., Loew, J., Elwell, J., Ponomarev, D., & Abu-Ghazaleh, N. (2011). TPM-SIM: A Framework for Performance Evaluation of Trusted Platform Modules. DAC: Annual ACM/IEEE Design Automation Conference, 236-241.

Schoen, S. (2003). Trusted computing: Promise and risk. Electronic Frontier Foundation, 16, 26.

Segall, A. (2011). Attestation and authentication protocols using the TPM. Retrieved from http://www.cylab.cmu.edu/tiw/slides/segall-attestation.pdf

Trusted Computing Group. (n.d.). TPM 2.0 library specification FAQ. Retrieved from https://www.trustedcomputinggroup.org/resources/tpm_20_library_specification_faq

Van Dijk, M., Sarmenta, L. F., Rhodes, J., & Devadas, S. (2007). Securing shared untrusted storage by using TPM 1.2 without requiring a trusted OS. Technical report, MIT CSAIL CSG Technical Memo, 498.

 


Denial of Service (DoS) Detection, Prevention, and Mitigation Techniques

Denial of Service (DoS) Detection, Prevention, and Mitigation Techniques

Author: Amy L. Wees

CSEC640

Abstract
Today most businesses host websites where customers can access their account information and employees can access timecards, conduct discussions, input customer information, track financials, and perform countless other activities. Without access to the network, productivity and profitability plummet. Denial of Service attacks aim large amounts of traffic at a server, causing it to crash or become overloaded and limiting access for legitimate customers. Denial of Service (DoS) attacks can do a lot of damage with little warning and leave the victim with much to recover (Goldman, 2012). For this reason it is imperative to detect, prevent, and mitigate DoS attacks where possible. This paper summarizes methods for DoS detection, prevention, and mitigation based on the research of three separate sources.
Denial of Service (DoS) Detection, Prevention, and Mitigation Techniques
Corporations, schools, government agencies, and even home computer users conduct most of their business on computer networks by sharing information, resources, and files. This networking can be accomplished on a closed network, but in most cases it happens from one network or host to another via the Internet. As soon as information travels over the wire from one place to the next, it becomes vulnerable to interception, corruption, theft, or misuse. Information entering a network from the Internet can also expose the entire network and its hosts to computer viruses, Trojans, malware, and a myriad of other dangers.

Today most businesses host websites where customers can access their account information and employees can access timecards, conduct discussions, input customer information, track financials, and perform countless other activities. Without access to the network, productivity and profitability plummet. In September 2012, the websites of Bank of America, Wells Fargo, PNC, JP Morgan, and US Bank were inaccessible to customers for over a week during the largest reported Denial of Service attacks in history (Goldman, 2012). Denial of Service attacks aim large amounts of traffic at a server, causing it to crash or become overloaded and limiting access for legitimate customers. In the recent bank attack, large application servers in various locations were connected and used as a botnet to overwhelm the banks' servers, resulting in an extended period of blocked access to customer financial information (Goldman, 2012). Botnets are often created from distributed computers which have been taken over without the user's knowledge through the use of viruses or malware. Although this type of attack was thought to require a lot of preplanning, it was not very sophisticated, proving that Denial of Service (DoS) attacks can do a lot of damage with little warning and leave the victim with much to recover (Goldman, 2012). For this reason it is imperative to detect, prevent, and mitigate DoS attacks where possible.

This paper aims to summarize methods for DoS detection, prevention and mitigation based on the research of three separate sources. The research papers chosen for this summary are as follows:
1. A Taxonomy of DDoS Attack and DDoS Defense Mechanisms by Jelena Mirkovic and Peter Reiher
2. DDoS attacks and defense mechanisms: classification and state-of-the-art by Christos Douligeris and Aikaterini Mitrokotsa
3. Survey of Network-Based Defense Mechanisms Countering the DoS and DDoS Problems by Tao Peng, Christopher Leckie, and Kotagiri Ramamohanarao

These sources were selected because each uses a similar methodology in analyzing DoS attacks. Each paper explores the different types of Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, the tools available both to perpetrate and to defend against attacks, and prevention or mitigation techniques. Whereas other research focuses on a single technique for detection and prevention, such as Internet Protocol (IP) traceback, packet filtering, or flow control, the works listed above examine a broad range of practices for DoS and DDoS detection, prevention, and mitigation based on the situations presented.

Attack Categories
There is no single clear path for detecting DoS or DDoS attacks. Detection is very much dependent on the type of attack, the target, and the perpetrator's method. This paper will first describe the various types of attack and, based on the chosen sources, list methods for detection, prevention, or mitigation. Accidental denial of service, such as that caused by a misconfigured computer or router, will not be considered in this summary.

DoS attacks can be categorized by what they target: the network device layer, the operating system layer, the application layer, data flooding, and protocol features (Douligeris & Mitrokotsa, 2004). Examples of these attacks and their detection, prevention, and mitigation strategies follow.

Network Layer Attack
At the network layer, attacks target weaknesses in the software or hardware of devices such as routers. For example, Cisco 700 series routers are known to have a buffer overrun issue in the password checking process. This weakness can be exploited by connecting to the router via telnet and entering lengthy passwords (Douligeris & Mitrokotsa, 2004). Routers can also be exploited through IP spoofing, where IP packets containing forged source information are sent to the router. Since the router has no authentication or traceback mechanism, the packets continue on their path with no way for the receiving target to detect what is happening or where the packets are coming from (Peng, Leckie, & Ramamohanarao, 2007). The problem is that spoofed packets congest the bandwidth shared by rogue and legitimate traffic, eventually denying service altogether.

Network Attack Detection
Router-based attacks can usually be detected by monitoring the amount of traffic across the network. If the traffic is unusually high, this may be reason for concern. Attacks launched at varying rates may be more difficult to recognize (Mirkovic & Reiher, 2004). One detection method mentioned by Mirkovic and Reiher (2004) is MULTOPS, which identifies IP addresses known to have participated in DoS attacks and keeps track of packet activity per IP address. This allows the victim to identify the likely attack source and filter or block those IPs immediately and in the future.
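
A minimal sketch of per-source rate tracking (illustrative only, and far simpler than MULTOPS) counts packets per source IP over a sliding window and flags sources that exceed a threshold:

    # Sliding-window packet counter per source IP; thresholds are illustrative.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10
    THRESHOLD = 1000  # packets per window considered suspicious

    arrivals = defaultdict(deque)

    def record_packet(src_ip, now=None):
        # Returns True once a source exceeds the threshold within the window
        now = time.time() if now is None else now
        q = arrivals[src_ip]
        q.append(now)
        while q and q[0] < now - WINDOW_SECONDS:  # expire old arrivals
            q.popleft()
        return len(q) > THRESHOLD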

Network Attack Prevention and Mitigation
Mitigating attacks against routers or domain name service (DNS) servers can be accomplished by setting up secondary fail-over resources within the network design (Mirkovic & Reiher, 2004). To prevent attacks, Peng, Leckie, and Ramamohanarao (2007) recommend packet filtering at the router to keep spoofed traffic from entering the network, with the caveat that filtering requires extensive deployment to be effective. Traffic should be filtered when entering and leaving the network and at each router along the way, so that traffic allowed into one area can still be dropped later if it does not meet network criteria.
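
At its core, such ingress filtering is a prefix check, as in the minimal sketch below (the prefix list is hypothetical): packets whose source address does not belong to the prefixes expected on an interface are dropped, which blocks most spoofed traffic at the network edge.

    # Ingress filter sketch: permit only sources from expected prefixes.
    import ipaddress

    EXPECTED_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                         ipaddress.ip_network("198.51.100.0/24")]

    def permit(src_ip):
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in EXPECTED_PREFIXES)

    print(permit("192.0.2.55"))   # True: inside an expected customer prefix
    print(permit("203.0.113.9"))  # False: likely spoofed, drop at the edge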

Operating System and Host Attacks
Attacks on operating systems exploit weaknesses in protocol implementation. An example is the Internet Control Message Protocol (ICMP) flood. Because ICMP messages can be broadcast to all machines in a network, an ICMP echo request with a forged source address (the victim's) can be sent to the broadcast address; the request is forwarded to all network hosts, and when every host sends an echo reply, the combined traffic floods the victim's network. This particular example is a “smurf” attack (Peng, Leckie, & Ramamohanarao, 2007). Another ICMP-related attack is the ping of death, which sends echo requests larger than the maximum IP packet size and can crash the victim's machine (Douligeris & Mitrokotsa, 2004).

The SYN flood attack exploits the three-way handshake required for a TCP connection in order to exhaust the memory of the targeted machine. The memory is exhausted because the attacker sends connection requests with forged source IP addresses, and the targeted machine stores each half-open connection in its memory stack while waiting for a response to complete the connection. Because a response will never be sent from the forged IP, the machine accumulates too many half-open connections before the entries eventually time out (Peng, Leckie, & Ramamohanarao, 2007).

Operating System and Host Detection
To detect TCP SYN floods, Mirkovic and Reiher (2004) recommend a standard detection strategy based on a rule-set that looks for half-open TCP connections so they can be deleted from the memory stack. Batch detection can also be used to detect SYN floods; it captures statistical information about incoming traffic over time, and an attack is detected when the traffic patterns change (Peng, Leckie, & Ramamohanarao, 2007).
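
A half-open-connection rule of this kind can be sketched simply (illustrative only, not a production IDS): track SYNs that are never completed by a handshake-finishing ACK, and flag sources holding too many of them.

    # Half-open connection tracking per source; the limit is illustrative.
    from collections import defaultdict

    half_open = defaultdict(set)  # src_ip -> set of (dst_ip, dst_port, seq)

    def on_syn(src, dst, port, seq):
        half_open[src].add((dst, port, seq))

    def on_handshake_ack(src, dst, port, seq):
        half_open[src].discard((dst, port, seq))

    def suspicious_sources(limit=100):
        # Sources holding more half-open connections than the limit
        return [ip for ip, conns in half_open.items() if len(conns) > limit]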

Operating System and Host Prevention and Mitigation
Mitigation of the ICMP flood can be accomplished by disabling the automatic rebroadcasting service or reconfiguring the routers to forward only specified traffic. To prevent a SYN flood, the operating system can be set to limit the number of TCP connections waiting for response and eventually drop them after a timed period (Peng, Leckie, & Ramamohanarao, 2007). To further prevent TCP SYN attacks, protocols on host machines should be patched and updated often (Mirkovic & Reiher, 2004).

Application Layer Attacks
Applications on hosts can also be attacked through their vulnerabilities. One example given by Mirkovic and Reiher (2004) is attacking an authentication server by sending phony signatures. The server will otherwise continue to function, but any application requiring authentication will be denied to the user.
A more commonly seen application-level attack is high traffic on a web site causing the web server to crash. This can be accomplished through a website's search engine, forms, account request pages, or sheer number of simultaneous visits, as in an HTTP flood. Because the Internet is so heavily utilized, most firewalls allow open traffic on port 80 (HTTP), making it a prime target for attack. During an HTTP flood, distributed attackers, known as botnets, flood the web server with requests. Most botnet software is designed to help attackers avoid detection by hiding IP addresses and pushing large files to sites, taking up even more bandwidth (Peng, Leckie, & Ramamohanarao, 2007).

Detection of Application Attacks
Detecting application attacks is problematic because there is not a complete denial of service, the malicious activity level is very low, and the packets are not necessarily identifiable. To detect application-level attacks, Mirkovic and Reiher (2004) recommend monitoring each application in the intrusion detection system and screening regularly for suspicious activity. HTTP floods can be detected by looking for repeat requests for large files, which can then be blocked by the server (Peng, Leckie, & Ramamohanarao, 2007).

Prevention and Mitigation of Application Layer Attacks
Douligeris and Mitrokotsa (2004) reference throttling as a mitigation tactic. An overloaded web server can have upstream routers set throttles so that all traffic passing through a router is limited to the configured rate. This prevents the web server from becoming overloaded and crashing. It also keeps requests that push large files (and thus exceed the throttle rate) from reaching the server while allowing legitimate requests through. The throttling method has not yet been proven in a large commercial setting.
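
One common way to implement such a throttle is a token bucket, sketched minimally below (the rate values are illustrative): traffic is forwarded only while tokens remain, capping the rate that reaches the server regardless of the offered load.

    # Token-bucket throttle sketch of the kind a router could apply.
    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate, self.capacity = rate_per_sec, burst
            self.tokens, self.last = burst, time.monotonic()

        def allow(self, cost=1.0):
            # Refill by elapsed time, then spend tokens if available
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False  # over the throttle limit: drop or queue

    bucket = TokenBucket(rate_per_sec=500, burst=1000)  # cap at ~500 requests/sec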

Mirkovic and Reiher (2004) recommend overall system security to defend against DDoS attacks: ensuring that measures such as intrusion prevention and detection systems and security patching are in place on all hosts. The idea is that attackers are able to gain control of zombie machines for botnets because so many machines are not secured properly. If simple security recommendations were followed, the chances of attackers assembling such a militia of machines, and the resulting level of attack, would be lessened.

Conclusion
Denial of service and distributed denial of service attacks can happen at many different layers and levels within a network. The examples given in this paper only scratch the surface of what is possible. The sources used for this summary offer a wealth of information on the detection, prevention, and mitigation of these attacks, all of which is significant in understanding the scope of the problem. Most importantly, they provide a way ahead for securing systems against specific attacks and confirm the difficulty of completely detecting, preventing, and mitigating denial of service attacks.

 

References

Douligeris, C., & Mitrokotsa, A. (2004). DDoS attacks and defense mechanisms: Classification and state-of-the-art. Computer Networks, 44(5), 643-666.
Goldman, D. (2012, September 28). CNNMoney. Retrieved from http://money.cnn.com/2012/09/27/technology/bank-cyberattacks/index.html
Mirkovic, J., & Reiher, P. (2004). A taxonomy of DDoS attack and DDoS defense mechanisms. ACM SIGCOMM Computer Communication Review, 34(2), 39-53.
Peng, T., Leckie, C., & Ramamohanarao, K. (2007). Survey of network-based defense mechanisms countering the DoS and DDoS problems. ACM Computing Surveys, 39(1), 1-42.

 


iTrust Database Software Security Assessment


Security Champions Corporation (fictitious) Assessment for client Urgent Care Clinic (fictitious)

Amy Wees, Brooks Rogalski, Kevin Zhang, Stephen Scaramuzzino and Timothy Root

University of Maryland University College

Author Note

Amy Wees, Brooks Rogalski, Kevin Zhang, Stephen Scaramuzzino and Timothy Root, Department of Information and Technology Systems, University of Maryland University College.

This research was not supported by any grants.

Correspondence concerning this research paper should be sent to Amy Wees, Brooks Rogalski, Kevin Zhang, Stephen Scaramuzzino and Timothy Root, Department of Information and Technology Systems, University of Maryland University College, 3501 University Blvd. East, Adelphi, MD 20783. E-mail: acnwgirl@yahoo.com, rogalskibf@gmail.com, kzhang23@gmail.com, sscaramuzzino86@hotmail.com and Chad.Root@gmail.com

 

Abstract

The healthcare industry, taking in over $1.7 trillion a year, has begun bringing itself into the technological era.  Healthcare and the healthcare industry make up one of the most critical infrastructures in the world today, and one of their greatest challenges is the storage of information and data.  As the industry works to stay at the forefront of technological advances, many changes are taking place to streamline copious amounts of information and data into something more manageable.  One major change in the healthcare industry has been the implementation of Electronic Medical Record (EMR) systems.  With both risks and benefits, electronic medical record systems promise to change the way the healthcare industry operates.  iTrust is a role-based health care web application.  Through this system, patients can see and manage their own medical records.  Medical personnel can manage the medical records of their patients, including those provided by other medical personnel; be alerted of patients with warning signs of chronic illness or missing immunizations; and perform bio-surveillance such as epidemic detection.  Today, these electronic medical records lie at the center of the computerized healthcare industry and are gradually being implemented to provide modern technologies, such as cloud database systems and cloud network storage, as well as a way to streamline the medical data and patient information process.

 

Keywords: iTrust, database, cloud computing, software security, application security

 

iTrust Database Software Security Assessment

Security Champions Company is a software security company that specializes in assessment and analysis of software used primarily in the medical field.  Urgent Care Clinic has hired Security Champions to assess the primary cyber threats and vulnerabilities associated with the use of the open source electronic medical records software “iTrust”.  As much of the medical industry is moving toward electronic medical records (EMR), we want to ensure our client is in compliance with various stringent regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act (SOX).  We will also provide a risk assessment and ease-of-attack threat analysis for several new requirements Urgent Care Clinic has requested to add to the iTrust software.  The following four requirements are reviewed and assessed:

  1. Add a role for emergency responders to view patient emergency reports containing medical information such as allergies, current and previous diagnoses, medication and immunization history, as well as blood type.
  2. Allow patients to search the database for qualified licensed health care professionals (LHCP) for a specific diagnosis.  The patient will be able to view the doctor’s name, the number of patients treated for the specified condition, the laboratory tests requested and medication used to treat the diagnosis, as well as patient satisfaction ratings.
  3. Update the diagnostic code tables to reflect the new ICD-10 coding standards outlined by American Medical Association guidelines.
  4. Allow a patient to view the access log for their medical records in an online cloud database system, letting the patient see what changes are made to their records and who made those changes.

iTrust Database Software Overview

iTrust is an open source software application created and maintained by engineers at North Carolina State University.  The software allows for medical staff from various locations to access patient records, schedule visits, order medications and laboratory tests, and view records, diagnosis and test results.  iTrust also allows patients to manage their care by viewing records, scheduling office visits, and finding health care providers in the area (UMUC, 2011).

iTrust Database Table Security Assessment

Each Security Champions teammate individually assessed the security of the various database tables in the iTrust database.  The tables were rated using values limited to the numerical choices 1, 2, 3, 5, 8, 13, 20, 40, and 100, with 1 being the lowest security rating and 100 the highest.  The Appendix A table presents each teammate’s individual values (noted by initials) and the average of those values (see Appendix A: Table 1 – Database Table Value Points).

 

Analysis of New Requirements

The information age is growing exponentially, and access to resources and information has become critical.  This holds true in the medical field, particularly for medical staff, emergency responders, and patients.  Adding new requirements to the iTrust system allows for better care, better medical attention, and more useful information for the client.  These new requirements will enhance Urgent Care’s communication capabilities and allow for greater success.  Reviewed against case-by-case scenarios involving medical and background information, these requirements benefit every aspect of Urgent Care Clinic.  The following analysis provides more information on the new requirements.

Emergency Responder

Urgent Care Clinic is requesting four additional roles and allowable access points to the iTrust healthcare cloud database system and application.  The addition of these roles and access points will be valuable to emergency responders and to individuals seeking information for their own medical care instantaneously.  An emergency responder, or first responder, is anyone qualified and certified to provide pre-hospital care before the patient enters a medical facility.  These responders need access to essential health information in order to provide the most appropriate and advanced medical treatment in efforts to save a patient’s life.  The responder could stabilize, treat, and perform certain medical procedures on the patient according to the patient’s personal medical history (Department of Health & Human Services, 2006).  The responder would need access to allergy, blood type, and prescription history information, as well as medical history showing prior surgeries and diagnoses.  This vital information is valuable when assessing individuals in the field.  The procedures the responders perform on the patient can then be documented in the iTrust system, so that when the individual arrives at the hospital the attending medical staff can view what was done and evaluate patient treatment from that point forward.

In order for the responder to be authorized and then authenticated to the system, a biometric control would be applied.  This authentication procedure would be extremely beneficial in the field, given the stress of the job: the responder would use his or her fingerprint to gain access and then proceed to the required medical information.  The use of a password and user ID would slow response to the patient because the responder would have to remember that information, and if responders are unable to gain access and proper medical care is not administered, this could lead to lawsuits or even death.  HIPAA would have a role in allowing these types of responders to gain access; patients could sign a HIPAA waiver during a doctor visit and have it kept in a database so access could be granted without hassle.

Find qualified licensed health care professional

            Allowing the patient to search the iTrust cloud database system for an LHCP would give them more control over their health care and enhance the quality of care they receive based on their preferences and diagnosis.  This requirement will also give the user relevant information on where the medical facility is, how many people have been referred to the facility, which doctors are considered experts in a particular field, what procedures were used, and the satisfaction of the people who have been seen at that particular facility.  Allowing the patient to name their own medical preferences would also decrease man-hours for the staff normally responsible for these tasks.

Providing patients electronic health and medical information by means of a cloud database system paves the way for streamlined care by delivering the latest medical reports instantaneously, allows rural individuals to gain access to specialized medical procedures, and may cut costs in certain healthcare facilities (Polito, 2012).  Being able to reduce the number of patients in any single facility would allow for better care of patients, decreased wait times, and more precise diagnoses, because each patient’s current medical information and history could be reviewed thoroughly and quickly by medical staff.  Finding the right health care professional would also give the patient the opportunity to prepare questions in advance, obtain selected information prior to the visit, and better know what to expect during the entire process.  This information could be very valuable to patients and medical providers alike, because providers would not spend time on individuals who may not have a particular medical problem and would otherwise have to be referred to another doctor.  This access could make medical care more efficient and effective.

Update diagnosis code table

ICD-9CM is an outdated medical code system; the new internationally used code system is ICD-10.  This new code system needs to be implemented in the iTrust medical application so that medical providers can accurately diagnose patients and medical staff know the patient’s history.  Updating the coding system will provide proper analysis, quality management within the medical profession, increased productivity, and overall compliance with medical regulations (Bounos, n.d.).  Entering the new codes would allow a patient to be seen at multiple facilities throughout the world, with all medical care providers understanding the prior history.  Outdated codes could cause errors in treating the patient and possibly cause severe physical harm.  The ICD-10 includes significant improvements, such as expanded codes for describing diagnoses and symptoms and information relevant to ambulatory and managed care encounters (Centers for Disease Control and Prevention, 2012).

View Access Log

The last requirement is to make available to the patient the ability to view the access log for their medical records.  The access log provides information on who updated the patient’s medical information and when the change occurred.  This could be very valuable to a patient, who can then raise any discrepancies in their chart with the facility.  It can also act as a check-and-balance system between patient and provider, and could assist with medical insurance billing and payment information.  For example, if a patient was diagnosed with ailment X but the provider mistakenly coded ailment Y in the system, the insurance company might not cover the cost of the visit or associated procedures; the access log information the patient views can be provided to the insurance provider and the medical facility for correction.
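A hypothetical sketch of such an access-log record follows; the field names are illustrative stand-ins, since iTrust's actual transaction log schema may differ:

```python
# Hypothetical sketch of the access-log records a patient would review:
# each entry captures who touched the chart, when, and what changed.
# Field names are illustrative; iTrust's actual schema may differ.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessLogEntry:
    patient_mid: str      # whose record was touched
    accessor_mid: str     # who touched it (doctor, staff, ...)
    action: str           # e.g. "VIEW", "UPDATE_DIAGNOSIS"
    detail: str = ""      # e.g. "ailment X -> ailment Y"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log: list[AccessLogEntry] = []
log.append(AccessLogEntry("P1001", "D2002", "UPDATE_DIAGNOSIS", "ailment X -> ailment Y"))

# What the patient sees when reviewing their own chart's history:
for e in log:
    print(f"{e.timestamp:%Y-%m-%d %H:%M} {e.accessor_mid} {e.action} {e.detail}")
```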

Another advantage of allowing patients to view the access log is that they are able to see if someone has compromised their medical information.  Catching a breach early enough may give law enforcement time to track the perpetrator before the information is used in a manner harmful to the patient.  Without such access, the likelihood of a patient knowing their information has been tampered with is severely lessened.

HIPAA was enacted to ensure that personal medical and health information remains secure from others who could use the information wrongfully or intentionally against an individual.  HIPAA gives the patient more control over their personal information and places limits on who can see the information and on what information is disclosed (Thacker, 2003).  The law itself provides the patient with access to their medical information and the ability to see what was logged in their records.

Applying the new requirements to the iTrust medical cloud database system gives responders, medical professionals, and patients the ability to see information that could lead to a proactive approach to medical care.  The resulting efficiency in how medical care is provided could save on medical care costs and make hospital visits more effective, due to the limited number of individuals waiting to be seen.  Patients will have the option to make an informed decision on which doctor they will see, with more background information before they see any individual.

 

Ease of Attack

            The iTrust cloud database is relational and made up of tables that account for all the data-processing needs of a medical office.  The tables record transactions and patient information.  Specifically, data recorded for patients is considered personal health information and falls under the Health Insurance Portability and Accountability Act (HIPAA).  New requirements to the database will pose risks to the confidentiality, integrity, and availability (CIA) of data if threats are not mitigated.

The following tables provide supplemental data that feed into the patient record and transaction history: medical procedures (table: cptcodes) lists procedures performed at office visits and hospitals; hospitals (table: hospitals) lists the hospitals in the system; diagnosis and immunization (table: icdcodes) lists diagnoses and immunizations with their codes; and standard medication codes (table: ndcodes) lists medications.

The remaining tables are relational, linked to one another through ‘id’ fields.  Allergies (table: allergies) links to the patient record, listing each allergy by type or description.  Lab procedures (table: labprocedure) records what was performed during an office visit and relates to the patient and office visit tables.  The login failure log (table: loginfailures) records the date, time, and IP address of failed logins.  The office visit table (officevisits) relates the patient id, hospital id, and office id to the office diagnosis (table: ovdiagnosis), office medication (table: ovmedication), office procedure (table: ovprocedure), and office survey (table: ovsurvey) tables.  The patients table is the central table containing personally identifiable information; it relates to the personal health information (table: personalhealthinformation), personnel, lab procedure, users, and transaction log (table: transactionlog) tables.  Both patients and personnel are stored as ‘users’ in the users table.  Figure 1: Relational Design shows the relations between the tables.
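The linkage through ‘id’ fields can be illustrated with a simplified fragment of the schema. Table and column names below are stand-ins for the iTrust design described above, not its actual definition:

```python
# Illustrative fragment of the relational design: tables linked through
# 'id' fields, here patients joined to office visits and hospitals.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patients  (mid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE hospitals (id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE officevisits (
    id INTEGER PRIMARY KEY,
    patient_mid INTEGER REFERENCES patients(mid),
    hospital_id INTEGER REFERENCES hospitals(id),
    visit_date TEXT
);
INSERT INTO patients  VALUES (1, 'Jane Doe');
INSERT INTO hospitals VALUES (10, 'Urgent Care Clinic');
INSERT INTO officevisits VALUES (100, 1, 10, '2012-07-01');
""")

# The same linkage an attacker exploits by inference: following ids across tables.
row = db.execute("""
    SELECT p.name, h.name, v.visit_date
    FROM officevisits v
    JOIN patients  p ON p.mid = v.patient_mid
    JOIN hospitals h ON h.id  = v.hospital_id
""").fetchone()
print(row)  # ('Jane Doe', 'Urgent Care Clinic', '2012-07-01')
```

The same joins that make the design efficient are what an attacker follows when inferring relationships across tables.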

Ease of attack is a calculation of valued risk by table (value points) on a scale of 1 to 100; the value points show which tables will be least and most attractive to attack.  For each requirement, the average of the value points of the tables it uses is multiplied by the maximum asset value among those tables to obtain a security risk value (see Appendix B: Table 2 – Database Tables Used by Requirement).  The requirements are then ranked by security risk, where a higher value means a higher ease of attack and a lower value means a lower ease of attack (see Appendix C: Table 3 – Security Risk).  In descending order of ease of attack, the requirements are: the ability to view the access log of who has viewed the patient’s medical records by date; the additional role of emergency responder (ER), who can see a ‘report’ detailing the patient’s vital medical information; the update of the diagnosis codes for all diagnoses beginning January 1, 2010; and the ability to query for a medical professional according to diagnosis and zip code.
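The ranking arithmetic can be reproduced directly from the value points in Appendix B. A minimal sketch (requirement 3 is omitted for brevity; its 19 table values follow the same pattern):

```python
# Reproducing the arithmetic from Appendices B and C: security risk =
# average value points of the tables a requirement touches, multiplied
# by the maximum asset value (100 for every requirement).
requirement_table_values = {
    "1: Add role: emergency responder": [20, 5, 5, 40, 3, 20, 40, 100, 100, 60],
    "2: Find qualified LHCP":           [3, 5, 40, 5, 3, 8, 20, 1, 1, 100],
    "4: View access log":               [40, 40, 8, 100, 100, 100, 20, 100],
}
MAX_ASSET_VALUE = 100

for req, values in sorted(requirement_table_values.items(),
                          key=lambda kv: -(sum(kv[1]) / len(kv[1]))):
    ease = sum(values) / len(values)
    print(f"{req}: ease={ease:.1f}, risk={ease * MAX_ASSET_VALUE:.0f}")
# 4: View access log: ease=63.5, risk=6350   (highest ease of attack)
# 1: Add role: emergency responder: ease=39.3, risk=3930
# 2: Find qualified LHCP: ease=18.6, risk=1860
```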

The most vulnerable requirement is providing the patient the ability to view their access log.  The access log provides vectors of attack that allow a malicious user to exploit the inference problem (Newman, 2009) to build a picture of internet protocol (IP) addresses within the network, users’ medical identification numbers (MIDs), and actions taken.  Knowledge of the network configuration allows the attacker to focus on particular subnets and eventually build an attack using database footprinting (McDonald, 2002).  Personnel and other patients’ user IDs could also be exposed; an attacker can infer which user IDs belong to which personnel and eventually determine their level of access (i.e., doctor, administrator, etc.).

 

 

Figure 1: iTrust Relational Design


Allowing emergency responders to pull a report showing the patient’s vital statistics has several vulnerabilities.  The vital record statistics can be accessed from a police cruiser, an ambulance, and possibly a smart-phone type device.  As this information is considered personal health information, records could be left in the open or accessed by any emergency responder regardless of an emergency.  A malicious user can glean a great deal of knowledge for further attacks by combining inference across records with emergency responder access.  Personal records provide insight into other hospital or provider accounts that may be exploited to gain either more information or elevated access to other systems within iTrust.

The need to update diagnosis codes throughout the system implies that the access control level providing this ability can reach nearly every table within the iTrust system.  The threat itself is low: all diagnoses must be coded with ICD-10 rather than ICD-9CM and saved to the patient and healthcare provider record.  A malicious attacker could, however, use this attack vector to footprint the database as a platform for further attacks.

The hardest attack vector is the ‘find qualified healthcare professional’ requirement.  A diagnosis and zip code are all that is needed to query the database for potential healthcare providers in the patient’s locality.  If a malicious attacker has already exploited one of the easier vulnerabilities presented by the new requirements, data provided in a report could help determine how the database is structured.

The new requirements assume that the system as a whole is secure and has not already been breached.  The relational database’s design provides efficiency in data processing and access, but it also presents security challenges if they are not mitigated.  A malicious attacker can infer a great deal about the patients, personnel, and users within the database because of its relational design.  Security mitigations need to provide a level of confidentiality that ensures personally identifiable information is not left vulnerable, per the regulations ensuring privacy.

Threats, Vulnerabilities, and Liabilities at Urgent Care Clinic

With the advancement of technology and the growing trends of enterprise networks, medical clinics like Urgent Care are becoming innovative and adopting new forms of database storage and network systems, namely cloud database systems and other forms of cloud storage.  Cloud database systems have become pervasive; in a broad sense, they are virtual servers housed on the Internet and used for data storage.  The cloud database system focuses on increasing the capabilities and capacity of network storage without requiring investment in new infrastructure.  It is a technology that utilizes remote servers to maintain data and applications that consumers and businesses can access at any time, from anywhere, using the Internet (Gruman & Knorr, 2012).  The term covers many different computing models, such as Platform as a Service (PaaS)[1], Software as a Service (SaaS)[2], and Infrastructure as a Service (IaaS)[3].  However, implementing a medical cloud database system or utilizing Urgent Care Clinic’s iTrust cloud database services can have both positive and negative effects on data security and consumer availability.

Technological advancements and the unlimited accessibility involved with cloud storage, especially for electronic medical records, open additional avenues for vulnerabilities and threats against data, network systems, and company reputations (Trend Micro, 2011).  Cloud storage is here to stay, and some very important threats need to be addressed in iTrust’s cloud database systems.  With large amounts of data being transferred to cloud storage servers, physical attacks are giving way to network-infiltration attacks and abusive use of the cloud through dishonest activities.  Because cloud computing and cloud storage rely on seamless, easy registration, criminals using new and advanced technologies are targeting weak registration systems and sliding under limited fraud detection software.  This ranks as an extremely high security risk for both businesses and consumers utilizing the iTrust system.  Botnets have the potential to infiltrate a public cloud network and spread malware and viruses to thousands of computers.  This has already been seen in the real world with the “Zeus botnet” attacking the Amazon cloud: having infiltrated Amazon’s EC2 cloud computing service, it installed a virus and took command and control of a high-performance cloud platform (Cimpanu, 2009).  The malware caused a system-wide outage while remaining hidden and transferring millions of dollars before it was dealt with.  In this instance and similar ones, publicly blacklisting IaaS network addresses has been one way to combat and defend against spam and phishing.  To further defend against such risks, enhanced monitoring methods for registration and initial validation should be initiated.  Whether implementing cloud analytics software or simply adding personnel for monitoring purposes, defending against such malware and botnets is a move in the right direction.

According to the Cloud Security Alliance (CSA), another threat to cloud computing and virtual data storage is ensuring that consumers understand the security implications associated with the usage of, and integration into, cloud service models.  Relying on a weak set of application programming interfaces (APIs) exposes organizations to a variety of security issues involving availability, confidentiality, and accountability (Cloud Security Alliance, 2010, p. 9).  Rectifying this situation involves better authentication and encryption procedures for access controls.  Also, examining the security models of iTrust’s data storage interfaces will help reduce the susceptibility of the company’s cloud network to attack.  What does this mean for consumers?  For the users of Urgent Care Clinic’s cloud network, it instills a sense of confidence when logging into the virtual storage network that their access information and registration will be kept confidential and secure.

Risks on Urgent Care’s end are also apparent; chief among them is the need for a separation of duties, so that a single insider with too much power and access cannot mount a malicious attack.  When ranking this threat among others, it belongs at the top of the list.  The impact of a malicious insider can be devastating to users and have an even larger impact on the organization.  Although this has not yet been widely seen in cloud computing and virtual database systems, insider attacks do happen.  Financial loss, productivity loss, and damage to the company’s integrity and reputation are just a few of the areas affected by malevolent insiders.  As the human element takes over and companies move toward virtual storage systems, it becomes critical for consumers to understand what policies and procedures are in place to thwart such attacks.  Urgent Care needs to enforce stricter supply chain management by incorporating a separation of duties as a way to bring checks and balances into the iTrust application.  Security breach notification policies also need to be applied, and employees need to be aware of their surroundings and report any atypical information or suspicious behavior.  Training and preventative measures go a long way in preserving company data and brand reputation.

iTrust cloud database users must also be aware of the issues revolving around shared technologies.  Whether virtual machines, management technologies, or communication systems, these shared technologies were never designed for strong compartmentalization.  As a result, hackers and malicious individuals focus on how to influence the processes of other cloud database customers and how to gain unlawful access to sensitive medical data.  For example, a zero-day attack, one that exploits previously unknown application vulnerabilities such as the “Blue Pill” technique, has the potential to spread rapidly across a public cloud and expose all the data within the server.  Blue Pill is a program written for a particular operating system that, if implemented, could embed malware into a system and go undetected until it is too late.  Implementing security best practices together with high-tech monitoring software will help prevent such attacks.  In addition to monitoring software, enforcing a service level agreement for vulnerability remediation, along with continual scanning and auditing, will help keep clinical data and the iTrust virtual storage system secure.

According to cloud computing and cloud database system standards, another risk that ranks alongside the other threats and vulnerabilities is the protection of sensitive information and personal data.  Companies need to secure information and data through identity management.  Identity management has been described as the most essential form of information protection an organization can use (Aitoro, 2008, para. 3) and can be defined as the process of representing, using, maintaining, and authenticating entities as digital identities in computer networks (Seigneur & Maliki, 2009, p. 270).

Along with the need for identity management is the requirement for accurate auditing and reporting under federally mandated regulatory and compliance directives such as the Sarbanes-Oxley Act (SOX) and the Health Insurance Portability and Accountability Act (HIPAA).  The Sarbanes-Oxley Act, enacted in 2002, is legislation designed to protect shareholders and the general public from accounting errors and fraudulent enterprise practices.  It is governed by the Securities and Exchange Commission (SEC), which sets deadlines for compliance and publishes rules on requirements; Sarbanes-Oxley also outlines which records are to be stored and for how long (Spurzem, 2006, para. 1).  HIPAA provides federal protections for personal health information held by covered entities and gives patients an array of rights with respect to that information.  The HIPAA Privacy Rule is balanced so that it permits the disclosure of personal health information needed for patient care and other important purposes, while the HIPAA Security Rule specifies a series of administrative, physical, and technical safeguards for covered entities to use to assure the confidentiality, integrity, and availability of electronic protected health information (U.S. Department of Health and Human Services, n.d., para. 1).

As a direct result of the organizational need for both identity management and accurate federal reporting, identity management systems were developed that provide the ability to log, control, audit, and report on end-user access to particular information assets, serving as the foundation of an organization’s threat and overall compliance strategy (DeFrangesco, 2009, para. 1-2).  Identity management systems are designed to create processes that address the five fundamental aspects of identity management: authentication, authorization, accountability, identification, and auditability (UMUC 2, 2010, p. 5).

Additionally, there are many ways in which data can be compromised, and not having a backup of sensitive material remains the biggest fault among users and organizations.  Because data loss can have an overwhelming negative impact on a business, it is in the company’s best interest to provide proper policies and hardware for data duplication; without backups in place, a company’s information is simply unrecoverable.  The threat that data will be compromised in the cloud increases due to the number of, and interactions between, risks and challenges that are either unique to virtual database systems or more dangerous because of the operational characteristics of the virtual cloud environment (Cloud Security Alliance, 2010, p. 12).  Besides the company’s reputation and integrity being compromised, there is a significant negative effect on customer morale and trust.  A company is only as good as the quality of work it produces, and when data is leaked or lost, users are not happy.  Cloud information systems and virtual data storage revolve around the ability to access sensitive data and personal information at any time; if that service goes down or is compromised, the company’s reputation will be damaged and a significant financial burden placed on the organization.  Even worse, depending on the incident, a company might incur legal ramifications for compliance violations.  To avoid a severe data loss incident, an organization should keep multiple backups in place, and the data stored on the network should be encrypted so it is also secure in transit.  Not only will this provide a sense of data integrity for Urgent Care, it will offer peace of mind to the consumers utilizing the iTrust virtual data storage system.

Another issue that remains a legitimate threat to iTrust users is account, service, and traffic hijacking.  Through vectors ranging from phishing to spam, stolen user credentials or mobile devices allow hackers to infiltrate entire company networks.  With sensitive data hosted on virtual servers, hackers gain an all-access pass to everything simply by obtaining entry, user login information, or an unmonitored mobile device.  Because untrained and gullible employees remain the easiest point of entry, attacks on passwords, devices, and user credentials remain among the most common.  If an attack were to happen on the Urgent Care network, the hacker would have the ability to monitor transactions, manipulate data, and steal personal customer information at the click of a button.  Preventative measures must be taken by applying password policies, tracking software, and Internet usage rules to all employees.  Employees must keep personal information and credentials to themselves, and appropriate monitoring software must be introduced to oversee activity within the organization.  “Organizations like Urgent Care should be aware of these techniques as well as common defense in depth protection strategies to contain the damage resulting from a data breach” (Cloud Security Alliance, 2010, p. 13).

Finally, when adopting Urgent Care’s virtual storage network service, it is important to provide users with PCI-compliant software services.  Standards, compliance with internal security procedures, and the information that might be disclosed after an incident occurs tend to be overlooked, creating an unknown risk profile when moving ahead with cloud computing.  Because companies want to move forward with virtual network storage for the low costs and other benefits of implementation, these overlooked questions, such as how data is being stored or who has access, may lead to serious malicious threats.  Unknown risk profiles can be better understood by analyzing the Heartland data breach.  In May of 2008, Heartland, the fifth biggest payments processor in the United States, was hacked through known-vulnerable software.  The software’s loopholes allowed hackers to embed a data sniffer that captured credit card information, card numbers, expiration dates, and internal bank codes, letting them duplicate cards and steal customer and business finances (Slattery, 2009).  Once Heartland knew about the issue, it took only minimal steps to rectify the situation: it did not make the extra effort to notify every affected user, and it was willing to do only the bare minimum to comply with state laws.  If an organization is to learn anything from the Heartland breach, it is to go above and beyond the bare minimum, not only complying with state and federal laws but contacting every affected user while following proper incident response procedures.  Not abiding by these rules, or failing to maintain a proper incident response plan, could cause Urgent Care’s reputation to take a dive and would, in turn, discourage existing and future customers from utilizing the iTrust database service.  Chris Whitener, Chief Security Strategist for Hewlett-Packard, said that “companies should not jump into the cloud or virtual network storage without a proper risk assessment” (Mimoso, 2010).  Organizations need to be aware of the risks and evaluate the vulnerabilities as needed.

In summary, cloud and virtual database storage is, and will continue to be, part of the critical infrastructure of many businesses such as Urgent Care, and so security and response policies and procedures must be considered when migrating to the iTrust virtual storage system.  “This role is likely to grow as a multitude of new services are developed and commercialized and users’ level of familiarity and comfort, with this approach to service delivery, develops and grows” (Kate, 2011).  Companies are drawn by the costs and benefits of cloud computing and virtual storage, but they should stay focused on the consumer and end-user aspect of the business; that is what will drive a company to the next level.  Ultimately, the end-user is the one taking the risk by providing a facility such as Urgent Care with sensitive personal data.  Organizations that take the next step to secure, monitor, and regulate the information housed on the virtual database network are the ones most likely to give peace of mind to the end-user.  “From this study of current cloud computing and virtual storage practices and inherent risks involved, it is clear that at present there is a lack of risk analysis approaches in the cloud computing environments. A proper risk analysis approach will be of great help to both Urgent Care and their patients. With such an approach, patients and staff can be guaranteed data security and Urgent Care Clinic can win the trust of their customers” (Angepat & Chandran, 2012).  Cloud computing is an ever-growing technology for storage and data processing, and the threats and vulnerabilities involved cannot be left by the wayside.  It is, and will always be, in the best interest of a company like Urgent Care to test against these threats and make changes above and beyond expectations.  If Urgent Care’s integrity is compromised, then what else is there?  Nothing.  It is in the best interest of any organization to fortify its cloud network and take into consideration the threats focused on in this paper, gaining ample knowledge to defend against attacks and strengthen security.

Changes to Security Management Policies

With the inclusion of the new requirements, changes will have to be made to the security policy in order to reduce risk.  These changes can be phased in over time, but should take as little time as possible.  The first step is to improve authentication protocols, for example with stricter password requirements or PKI-based authentication (Katsumata, Hemenway, & Gavins, 2010).  Admittedly, a more stringent password requirement may be more of a hassle for patients; employees, on the other hand, should be expected to have strong passwords or utilize the PKI system.  Cost may factor into this change, and indeed the integration of a PKI system can cost up to $1,000,000 (Katsumata et al., 2010).  However, the risk reduction is far greater for PKI than for passwords, and its cost-to-benefit ratio is much lower in comparison (Katsumata et al., 2010).  Security is an investment, not only for the company, but also for its users.

The inclusion of regular audits can help improve and refine access controls, making sure that employees have the correct authentication and that patient access is both secure and unencumbered (Sommer & Brown, 2011).  Penetration testing can find weaknesses in the system as the new roles are established and the entire system is changed to incorporate the new security measures (Sommer & Brown, 2011).  Lastly, plans for disaster recovery and mitigation should be prepared.  Even with all the latest technologies and best policies, there is always the chance that someone will have the luck or skill to break through all the barriers.  As such, having contingency plans can help reduce the impact of a security breach (Sommer & Brown, 2011).  Data redundancies and system technicians trained and prepared for such a crisis can help mitigate damage considerably (Sommer & Brown, 2011).

Each requirement needs to be fine-tuned over time to close any potential leaks and security hazards.  For example, accessing system logs should require strong authentication and regular audits.  The audit itself can serve as a universal security check-up on each requirement, as audits seek out both weaknesses and discrepancies in the system (Sommer & Brown, 2011).  Be it patients’ access rights, emergency responder access, or system administrator access, each profile must be scrutinized for properly configured permissions and access controls.  Of the new requirements, updating the diagnosis code table has the lowest priority: authentication is still necessary, but the team decided it was unlikely to be a target for attack.  Coming to this conclusion was very much a team effort.

Reaching consensus on the prioritization of security issues was a surprisingly uncomplicated task.  Each member of the team reviewed the iTrust addendums and filled out the tables according to individual opinion.  The individual tables were collected and the values averaged, so that every team member’s opinion was weighed equally and fairly.  Fortunately, while there were minor differences in security values and ease-of-attack points, all team members produced very similar tables regarding prioritization.  Every team member agreed that certain tables, such as cptcodes, hospitals, icdcodes, and ovprocedure, were not of high value for attacks.  Similarly, the team shared the opinion that the patients, personalhealthinformation, personnel, and users tables were of the highest value, and thus the most likely to be attacked.  Despite the new requirements needing different access levels to different tables, the team determined that all new roles were equally viable, and highly vulnerable, targets for attack because of the number of tables each role needed access to; each requirement would access a high-risk table at some point.  As a result, the team agreed that all new requirements were at high risk of attack.  Lastly, using the ease-of-attack value combined with the asset value, the team was able to prioritize the security issues.

Conclusion

There are always lessons to be learned when reevaluating an existing security policy.  It is foolhardy to blindly set up new requirements and roles without properly assessing the risk factors these new roles may introduce.  Rather, it is important to examine both the new requirements and the currently established roles and determine the level of risk they represent.  By looking at this objectively, we can produce a priority list.  In establishing a priority list, we are better able to allocate appropriate resources to protect particularly vital data tables without compromising the overall security of the network.  Since these modifications will reflect on security as a whole, we must be careful in making these changes.  Ensuring compliance with federal standards is a fantastic first step in the right direction, but we must also look to exceed these minimum requirements.  This leads to establishing trust between provider and client, and trust is what builds successful relationships.  An important lesson learned is to make certain that we both deserve and can hold onto the trust of clients, and an excellent way to do so is to make their data secure.

 

Appendix A: Table 1 – Database Table Value Points

| Table | Value (SS) | Value (TR) | Value (KZ) | Value (BR) | Avg. | Use in Requirement # |
|---|---|---|---|---|---|---|
| allergies | 20 | 15 | 1 | 20 | 20 | 1, 3 |
| cptcodes | 1 | 10 | 3 | 3 | 3 | 2, 3 |
| hospitals | 5 | 5 | 5 | 5 | 5 | 1, 2, 3 |
| icdcodes | 5 | 20 | 5 | 5 | 5 | 1, 2, 3 |
| labprocedure | 13 | 70 | 40 | 40 | 40 | 2, 3, 4 |
| loginfailures | 40 | 40 | 20 | 20 | 40 | 3, 4 |
| ndcodes | 1 | 50 | 3 | 3 | 3 | 1, 2, 3 |
| officevisits | 8 | 4 | 20 | 8 | 8 | 2, 3, 4 |
| ovdiagnosis | 20 | 60 | 40 | 20 | 20 | 1, 2, 3 |
| ovmedication | 40 | 3 | 13 | 20 | 40 | 1, 3 |
| ovprocedure | 1 | 30 | 2 | 1 | 1 | 2, 3, 4 |
| ovsurvey | 1 | 2 | 1 | 1 | 1 | 2, 3 |
| patients | 100 | 80 | 100 | 100 | 100 | 1, 3, 4 |
| personalhealthinformation | 100 | 40 | 40 | 100 | 100 | 1, 3, 4 |
| personnel | 100 | 90 | 100 | 100 | 100 | 2, 3, 4 |
| transactionlog | 13 | 1 | 20 | 20 | 20 | 4 |
| users | 20 | 100 | 100 | 40 | 100 | 3, 4 |
| longtermdiagnosis | n.d. | 40 | 1 | n.d. | 40 | 1, 3 |
| shorttermdiagnosis | n.d. | 60 | 3 | n.d. | 60 | 1, 3 |

 

 

 

Appendix B: Table 2 – Database Tables Used by Requirement

| Requirement | Table(s) Used (Consensus), with Average Value Points | Max Value |
|---|---|---|
| 1: Add role: emergency responder | allergies (20), hospitals (5), icdcodes (5), longtermdiagnosis (40), ndcodes (3), ovdiagnosis (20), ovmedication (40), patients (100), personalhealthinformation (100), shorttermdiagnosis (60) | 100 |
| 2: Find qualified licensed health care professional | cptcodes (3), hospitals (5), labprocedure (40), icdcodes (5), ndcodes (3), officevisits (8), ovdiagnosis (20), ovprocedure (1), ovsurvey (1), personnel (100) | 100 |
| 3: Update diagnosis code table | allergies (20), cptcodes (3), hospitals (5), icdcodes (5), labprocedure (40), loginfailures (40), ndcodes (3), officevisits (8), ovdiagnosis (20), ovmedication (40), ovprocedure (1), ovsurvey (1), patients (100), personalhealthinformation (100), personnel (100), transactionlog (20), users (100), longtermdiagnosis (40), shorttermdiagnosis (60) | 100 |
| 4: View access log | labprocedure (40), loginfailures (40), officevisits (8), patients (100), personalhealthinformation (100), personnel (100), transactionlog (20), users (100) | 100 |

Appendix C: Table 3 – Security Risk

| Requirement | Ease of Attack Points (Average) | Max Value of Asset Points | Security Risk | Rank of Security Risk |
|---|---|---|---|---|
| 1: Add role: emergency responder | 39.3 | 100 | 3930 | 2 (based on higher ranking average) |
| 2: Find qualified licensed health care professional | 18.6 | 100 | 1860 | 4 |
| 3: Update diagnosis code table | 37.15 | 100 | 3715 | 3 |
| 4: View access log | 63.5 | 100 | 6350 | 1 |


 

References

Aitoro, J. (2008). Identity Management. Retrieved from: http://www.nextgov.com

Angepat, M., & Chandran, S. P. (2012, October 27). Cloud Computing: Analysing the risks involved in cloud computing environments. Retrieved July 29, 2012, from Cloud Computing: School of Innovation, Design and Engineering: www.idt.mdh.se/kurser/ct3340/ht10/…/16-Sneha_Mridula.pdf

 

Bounos, M. (n.d.). Evaluating computer assisted coding systems & ICD-10 readiness. Wolters Kluwer Law & Business.  Retrieved from http://www.mediregs.com/files/1007-1/WKLBEvaluatingCADICD10.pdf

 

Centers for Disease Control and Prevention. (2012). International classification of diseases, tenth revision clinical modification.  Classification of Disease, Functioning, and Disability.  Retrieved from http://www.cdc.gov/nchs/icd/icd10cm.htm

 

Cimpanu, C. (2009, December 10). Zeus Botnet Infiltrates Amazon’s Cloud. Retrieved July 29, 2012, from Softpedia: http://news.softpedia.com/news/Zeus-Botnet-Infiltrates-Amazon-s-Cloud-129438.shtml

 

Cloud Security Alliance. (2010, February 24). Top Threats to Cloud Computing V1.0. Retrieved July 29, 2012, from Cloud Security Alliance: http://www.cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf

 

DeFrangesco, R. (2009). Identity and Access Management as an Audit Tool. Retrieved from: http://www.itbusinessedge.com

 

Department of Health & Human Services. (2006). Emergency responder electronic health record. Office of the National Coordinator for Health Information Technology.  Retrieved from healthit.hhs.gov/…/EmergencyRespEHRUseCase.pdf

 

Gruman, G., & Knorr, E. (2012, February 29). What cloud computing really means. InfoWorld. Retrieved July 29, 2012, from http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031

 

Katsumata, P., Hemenway, J., & Gavins, W. (2010). Cybersecurity risk management. The 2010 Military Communications Conference – Unclassified Program. Retrieved from http://202.194.20.8/proc/MILCOM2010/papers/p1742-katsumata.pdf

 

Kate. (2011, June 7). Securing Your Data In the Cloud: An Insiders Perspective. Retrieved July 29, 2012, from Kate’s Comments: http://www.katescomment.com/securing-data-in-the-cloud/

 

Mimoso, M. S. (2010, March 1). Cloud Security Alliance releases top cloud computing security threats. Retrieved July 29, 2012, from Tech Target: Search Cloud Security: http://searchcloudsecurity.techtarget.com/news/1395924/Cloud-Security-Alliance-releases-top-cloud-computing-security-threats

 

McDonald, S. (2002, April 8). SQL Injection: Modes of attack, defense, and why it matters. Retrieved July 28, 2012, from Sans.org: http://www.sans.org/reading_room/whitepapers/securecode/sql-injection-modes-attack-defence-matters_23

 

Newman, R. (2009). Computer security: Protecting digital resources. Sudbury, MA: Jones and Bartlett Publishers International.

 

Polito, J. M. (2012). Ethical Considerations in Internet Use of Electronic Protected Health Information. Neurodiagnostic Journal, 52(1), 34-41.

 

Seigneur, J-M. & Maliki, T. (2009). Identity Management. In Vacca, J.R. (Ed.), Computer and information security handbook. Boston, MA: Morgan Kaufmann Publishers.

 

Slattery, B. (2009, January 21). Heartland Has No Heart for Violated Customers. Retrieved July 29, 2012, from PC World: http://www.pcworld.com/article/158038/heartland_has_no_heart_for_violated_customers.html

Sommer, P., & Brown, I. (2011). Reducing systemic cybersecurity risk. Organisation for Economic Cooperation and Development. Retrieved from http://papers.ssrn.com

 

Spurzem, B. (2006). Sarbanes-Oxley Act (SOX). Retrieved from: http://searchcio.techtarget.com

 

Thacker, S. (2003). HIPAA privacy rule and public health. Centers for Disease Control and Prevention.  Retrieved from http://www.cdc.gov/mmwr/preview/mmwrhtml/m2e411a1.htm

 

Trend Micro. (2011, August 23). Security Threats to Evolving Data Centers. Retrieved July 29, 2012, from Virtualization and Cloud Computing: www.trendmicro.com/cloud…/rpt_security-threats-to-datacenters.pdf

 

U.S. Department of Health and Human Services. (n.d.). Health Information Privacy. Retrieved from: http://www.hhs.gov

 

University of Maryland University College. (2011). CSEC 610: Cyberspace and Cybersecurity, Interactive Case Study II. College Park, MD, USA.

UMUC. (2012). Module 9: Virtualization and Cloud Computing Security. Adelphi, MD, USA. Retrieved July 23, 2012, from http://tychousa5.umuc.edu/cgi-bin/id/FlashSubmit/fs_link.plclass=1206:csec630:9042&fs_project_id=389&xload&ctype=wbc&tmpl=csecfixed&moduleselected=csec630_09


[1] “Platform as a Service (PaaS) is a way to rent hardware, operating systems, storage and network capacity over the Internet” (TechTarget, PaaS, 2012).

[2] “Software as a Service (SaaS) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network” (TechTarget, SaaS, 2012).

[3] “Infrastructure as a Service is a provision model in which an organization outsources the equipment used to support operations, including storage, hardware, servers and networking components” (TechTarget, IaaS, 2012).



Protection of Network Operating Systems

 

 

 

 

 

Protection of Network Operating Systems

Amy Wees

CSEC630

15 July 2012

 

Abstract

 

Operating systems are essential to business operations, system security and software applications. Users count on operating systems to provide easy-to-use graphical user interfaces (GUI), operate multiple applications at one time, and store and access data and information needed for everyday operations (UMUC, 2011).  Businesses count on operating systems to address and provide for the four basic security concerns of confidentiality, integrity, availability and authenticity (Stallings, 2011).  Although many operating systems have incorporated controls to address these security concerns, additional measures must be taken to ensure the necessary level of security is achieved.  Identification and Authentication protection measures are the most significant measures to implement: before a user or administrator is allowed to access the system, security measures must be implemented to identify and authenticate the need for and level of access.  After personnel are identified and authenticated, access control policies must be implemented to ensure limited access to applications, information, computers and servers on the network.  Internal-to-external and external-to-internal communications must also be protected and restricted.  Drafting and enforcing effective security policies and conducting annual audits allow for vulnerability assessment and the correction of weaknesses in configuration, training, or procedures.  The protection measures noted in this paper are rated in severity based on a case study on auditing UNIX systems by author Lenny Zeltser (2005).

 

 

 

Keywords: firewalls, security training, operating systems, security policies, password, access control, security management

Protection of Network Operating Systems

Operating systems are essential to business operations, system security and software applications.  Operating systems allow administrators to control access to the system, install and configure third-party commercial-off-the-shelf (COTS) software, and monitor activity with built-in auditing tools.  Users count on operating systems to provide easy-to-use graphical user interfaces (GUI), operate multiple applications at one time, and store and access data and information needed for daily operations (UMUC, 2011).  Businesses count on operating systems to address and provide for the four basic security concerns of confidentiality, integrity, availability and authenticity (Stallings, 2011).  Although many operating systems include built-in controls to address these security concerns, additional measures should be taken to ensure the required level of security is achieved.  This paper will address the implementation, advantages and disadvantages, and security management issues of three protection measures: Identification and Authentication, Access Control, and Security Policies and Auditing (Information Assurance Directorate, 2010).

 

Security Ratings and Prioritization

            The protection measures noted in this paper are rated in severity based on a case study on auditing UNIX systems by author Lenny Zeltser (2005).  A high severity rating is one that could result in an attacker or intruder gaining root-level access to a system, leading to potential loss of critical data.  A medium severity rating is given to vulnerabilities that could result in remote nonprivileged access to the system.  A low severity rating relates to improbable events that may result in a local attacker gaining nonprivileged access to the system (Zeltser, 2005).  The measures listed in this paper are rated as follows:

| Measure | Rating |
|---|---|
| Identification and Authentication protection measures | High |
| - Badge Access Control System | High |
| Access Control | High |
| - Host-Based Firewall | Medium |
| - Network Firewall | High |
| - Use of a DMZ | Medium |
| - Limiting Access to Data using Least Privilege & Separation of Duty Principles | Medium |
| - Enforcing Strong Password Policies | High |
| Security Policy | Medium |
| - Drafting Effective Security Policies | Low |
| - Security Awareness Program | Low |
| - Security Auditing | Low |

 

 

Identification and Authentication

            Identification and Authentication protection measures are the most significant measures to implement.  Before a user or administrator is allowed to access the system, security measures must be implemented to identify and authenticate the need for, and level of, access.  Pre-employment background checks can prevent organizations from hiring individuals with criminal records and can verify qualifying information on a candidate’s resume (Mallery, 2009).  A popular method for controlling identification and authentication is utilizing access badges.  Access badges can be linked to security systems to control and monitor physical access to the facility and to rooms within it, and most importantly logical access to the systems that contain proprietary or sensitive information.  Access badges also provide employees a visual tool for monitoring levels of access, job titles, and recognition of visitors.

Today many different types of access badge systems are available, and an organization must weigh the cost of a system against its benefit to security.  Smart card systems are relatively easy to implement, offering a multitude of vendors and interoperability with legacy systems.  After the user has verified his or her identity using a passport or driver’s license, and a representative of the company has verified the user’s required level of access, the user can be issued a smart card on which he or she sets a PIN, used from that point forward to verify identity and authorization for physical and logical access to the facility (Smart Card Alliance, 2003).

Management of the system will require information assurance professionals who can conduct background checks and verify identities as well as control and administer the computer applications associated with the system.  The organization will also need to prepare for possible outages of the system and develop procedures for training employees to identify badges, escort unauthorized individuals and properly wear, use and store badges.

Utilizing smart card technologies removes the need to verify identity on a daily basis and also allows a person's whereabouts to be monitored with ease.  Access changes can be made remotely from the management software application if an employee switches jobs, loses a badge, or leaves the company.  Smart cards can be used for both physical and logical access, and such access can be limited throughout the facility.  Smart cards can also reduce the number of passwords an employee has to remember, decreasing man-hours spent on password resets and locked-out systems.  Although the advantages are many, access badge systems can be costly, and a strong social engineer may be able to outsmart the system by replicating a badge or fooling an employee into granting access he or she should not have.

 

Access Control

The second most critical security measure is access control.  After personnel are identified and authenticated, access control policies must be implemented to ensure limited access to applications, information, computers, and servers on the network.  Internal-to-external and external-to-internal communications must also be protected and restricted.

A firewall is one of the best mechanisms for protecting the network from internal and external threats and for controlling and monitoring communications.  The Windows operating system offers an integrated firewall for use on clients, which drops incoming unsolicited traffic that is not in response to a request made by the computer and allows only specified unsolicited traffic through.  Host-based firewalls such as the Windows Firewall safeguard against threatening applications that use unsolicited traffic as an attack mechanism (Microsoft, 2012).  The network firewall should be attached directly to the Internet connection to block malicious traffic from entering the network; network firewall software can be installed on a dedicated server located between the Internet and the protected network (Goldman, 2006).  Firewalls can filter and monitor incoming traffic and also protect against insider threats such as users clicking on phishing e-mail links or navigating to dangerous websites.  Goldman (2006) notes that research shows seventy to eighty percent of malicious activity comes from insiders who already have network access.  Although firewalls are an advantageous method of protection, they can cause more damage than they prevent if configured improperly or maintained by administrators who do not understand the complex rules or monitoring procedures.  Firewalls must also be combined with other protection strategies such as vulnerability assessment tools, intrusion detection and prevention systems, and antivirus tools (Goldman, 2006).
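To make the default-deny behavior described above concrete, the following Python sketch models how a host-based firewall might decide the fate of an inbound packet: replies to connections the host itself opened are allowed, a small whitelist of unsolicited traffic is allowed, and everything else is dropped.  The packet fields, whitelist, and function names are illustrative assumptions, not the Windows Firewall API.

# Minimal sketch of default-deny inbound filtering, assuming simplified
# packet metadata. Illustrative only; not a real firewall interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_port: int
    is_reply: bool  # True if this packet answers a connection the host opened

ALLOWED_INBOUND_PORTS = {80, 443}  # hypothetical whitelist of unsolicited traffic

def inbound_verdict(pkt, blocked_ips):
    if pkt.src_ip in blocked_ips:
        return "DROP"    # known-bad source address
    if pkt.is_reply:
        return "ALLOW"   # solicited: a response to a request the host made
    if pkt.dst_port in ALLOWED_INBOUND_PORTS:
        return "ALLOW"   # explicitly permitted unsolicited traffic
    return "DROP"        # default-deny everything else

print(inbound_verdict(Packet("203.0.113.9", 22, False), blocked_ips=set()))  # DROP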

The physical location and configuration of assets on the network are also vital to access control.  For example, a demilitarized zone (DMZ) is a controlled area for the most vulnerable systems on the network.  If a user is hacked or a system is infected, the DMZ prevents interruption of essential functions such as e-mail and databases (Turner, 2010).

Password, user, and administrative access policies are equally essential to protecting the network and clients from outside and inside threats.  The level of access a user requires must first be determined using the principle of least privilege.  Files and information should be separated by roles or departments within an organization, and access given only to those assigned to those roles or associated with that department.  Limiting data access also decreases the possibility of an intruder gaining access to critical files.  Administrative access should likewise be limited to the roles and responsibilities of the administrator, and full administrative access to the network should be given on an extremely limited basis following a separation-of-duty policy.  Password policies should be understood by all users and administrators, and Windows Active Directory should be configured to enforce them.  Studies have shown the most secure password policies require a 14-character password comprising at least two uppercase letters, two lowercase letters, two numbers, and two unique characters.  Passwords should be changed every 60 days, and screen saver passwords enforced to prevent intruders from accessing open systems (Turner, 2010).  An excellent prevention and education measure for enforcing strong passwords is to run a password-cracking application such as L0phtCrack against the password database using a keyboard-progression dictionary of the kind often used by crackers.  If passwords are cracked, users should be notified and forced to change them.  Training in this way helps users and administrators learn to create and maintain strong passwords and to understand how easily weak passwords can be exploited for malicious purposes.
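As an illustration of the password rules cited above (a 14-character minimum with at least two uppercase letters, two lowercase letters, two numbers, and two special characters), the short Python sketch below checks a candidate password against each rule.  It is a teaching aid only; in a Windows environment these rules would be enforced through Active Directory policy rather than application code.

# Check a candidate password against the policy described in the text.
import string

def meets_policy(password):
    checks = {
        "length":  len(password) >= 14,
        "upper":   sum(c.isupper() for c in password) >= 2,
        "lower":   sum(c.islower() for c in password) >= 2,
        "digits":  sum(c.isdigit() for c in password) >= 2,
        "special": sum(c in string.punctuation for c in password) >= 2,
    }
    return all(checks.values())

print(meets_policy("Summer2013!"))            # False: too short, one special character
print(meets_policy("C0rrect!H0rse#Battery"))  # True: satisfies every rule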

 

Security Policies and Auditing

The likelihood of a business falling victim to cyber-attack increases as more and more businesses use technology to conduct operations and store critical information.  Attacks can cause severe financial losses to businesses and customers and destroy reputations.  Research has shown that most security breaches are caused not by misconfigured firewalls or poor password policies but by inadequate security planning (Hamdi, Doudriga, & Obaidat, 2006).  Drafting and enforcing effective security policies and conducting annual audits allow for vulnerability assessment and correction of weaknesses in configuration, training, or procedures.  The security policy should be based on business objectives; detail security measures for information systems, operating systems, and key management in the business environment; and document procedures for handling security incidents.  Security policies can also be multifaceted, separated by audience (such as technical versus end-user policies) or by issue (such as information classification and access control policies).  At a minimum, the security policy should address access privileges, user accountability and responsibility, authentication procedures, availability and maintenance of resources, and procedures for reporting violations (Hamdi, Doudriga, & Obaidat, 2006).

Enforcing security policies requires awareness programs and employee training.  Employees should feel they are stakeholders in the security of the organization.  Policies should be widely disseminated, easy to understand and follow, and reinforced through regular retraining.  Employees should know how to recognize and respond to security incidents.  The effectiveness of a security policy can be assessed using simple tests such as a contingency plan or emergency response practice drill (Hamdi, Doudriga, & Obaidat, 2006).

Conducting regular vulnerability assessments and audits of an organization's security posture helps ensure that weaknesses in operating systems, third-party applications, and security policies are identified.  This is best accomplished by hiring a third party to conduct the audit: security professionals are trained on many different systems and can educate staff on vulnerability management.  Audits can include penetration tests, which assess the external security of the network, or a less invasive vulnerability assessment that scans the system for threats and provides fix actions (Mallery, 2009).  If the organization decides not to outsource the audit, there are other options for scanning the network, such as the Nessus vulnerability assessment tool, as well as intrusion detection and prevention systems and antivirus tools.  The benefits of in-house tools are that they are always available and can often automatically assess and mitigate vulnerabilities; the drawbacks are that employee training to maintain such systems can be extensive, and the systems can be costly (Kakareka, 2009).  After audits are conducted, it is paramount to set a time frame in which to accept risks, remedy vulnerabilities, and update security policies and other relevant documents.
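For organizations that keep auditing in-house, a little scripting helps triage scanner output.  The hedged Python sketch below assumes a scan report exported to CSV with Risk, Host, and Name columns (Nessus can export results in roughly this shape, though column names vary by version) and groups findings by severity so the riskiest items are remedied first; the file name is hypothetical.

# Group vulnerability-scan findings by severity from a CSV export.
import csv
from collections import defaultdict

SEVERITY_ORDER = ["Critical", "High", "Medium", "Low", "None"]

def triage(csv_path):
    findings = defaultdict(list)  # severity -> ["host: finding", ...]
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            findings[row.get("Risk", "None")].append(
                "%s: %s" % (row.get("Host", "?"), row.get("Name", "?")))
    return findings

results = triage("scan_export.csv")  # hypothetical export file
for severity in SEVERITY_ORDER:
    for finding in results.get(severity, []):
        print("[%s] %s" % (severity, finding))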

 

Conclusion

Businesses rely on network operating systems as an effective way to control, manage, and secure their operations with ease.  Effective security of operating systems requires a defense-in-depth strategy that goes beyond what is inherent to the operating system.  Businesses must identify and authenticate employees using background checks and physical security procedures such as badging systems.

After identification and authentication, access to assets is best controlled using the principles of least privilege and separation of duties.  User and administrator access to shared electronic data folders and applications should be separated and limited by function or role.  Firewalls, DMZs, and physical separation of assets can be used to protect the network from unwanted incoming and outgoing traffic and from malicious actors.  Strong password policies and practices also help protect the network and prevent unauthorized access.

Finally, drafting a strong security policy based on risk analysis and business objectives, and confirming that employees have a clear understanding of policies and procedures, will go a long way toward developing a security culture in the organization.  Conducting periodic audits will ensure policies are updated and put into practice.

References

 

Goldman, J. (2006). Firewall Basics. In H. Bidgoli, Handbook of Information Security (pp. 2-14). Hoboken: John Wiley & Sons, Inc.

Hamdi, M., Doudriga, N., & Obaidat, M. (2006). Security Policy Guidelines. In H. Bidgoli, Handbook of Information Security (pp. 227-241). Hoboken: John Wiley & Sons, Inc.

Information Assurance Directorate. (2010). US Government Protection Profile for General-Purpose Operating Systems in a Networked Environment. Information Assurance Directorate. Retrieved from http://www.niap-ccevs.org/pp/pp_gpospp_v1.0.pdf

Kakareka, A. (2009). What is Vulnerability Assessment? In J. Vacca, Computer and Information Security Handbook (pp. 383-393). Boston: Morgan Kaufmann Inc.

Mallery, J. (2009). Building a Secure Organization. In J. Vacca, Computer and Information Security (pp. 3-21). Boston: Morgan Kaufmann Inc.

Microsoft. (2012). Windows Firewall. Retrieved from Microsoft TechNet: http://technet.microsoft.com/en-us/network/bb545423.aspx

Sensei Enterprises, Inc. (Director). (2010). How do I secure my computer network? [Educational Video]. Retrieved from http://www.youtube.com/watch?v=g_xzh1rqkNs&feature=youtube_gdata_player

Smart Card Alliance. (2003). Using Smart Cards for Secure Physical Access. Princeton Junction: Smart Card Alliance. Retrieved from http://www.smartcardalliance.org/resources/lib/Physical_Access_Report.pdf

Stallings, W. (2011). Operating Systems Security. Handbook of Information Security, 154-163.

UMUC. (2011). Prevention and Protection Strategies in Cybersecurity. Adelphi, MD, USA.

Zeltser, L. (2005). Auditing UNIX Systems: A Case Study. Retrieved from http://zeltser.com/auditing-unix-systems/#prioritizing


Company Cybersecurity Policy

Firion Corporation Cyber Security Policy

Amy Wees, Gary Coulter, Kyree Clarke, and Leonard Gentile

University of Maryland University College

Author Note

Amy Wees, Gary Coulter, Kyree Clarke, and Leonard Gentile, Department of Information and Technology Systems, University of Maryland University College.

This research was not supported by any grants.

Correspondence concerning this research paper should be sent to Amy Wees, Gary Coulter, Kyree Clarke, Leonard Gentile, Department of Information and Technology Systems, University of Maryland University College, 3501 University Blvd. East, Adelphi, MD 20783. E-mail: acnwgirl@yahoo.com, garyccoulter@gmail.com, kclarke61980@yahoo.com, and dragnard@yahoo.com

Abstract

The Firion Corporation is a leader in the development of specialized safety outerwear and has a niche market in the waste disposal, chemical, and biological industries. Firion employees use technology in every aspect of the business.  Databases contain private customer information, unique software assists in the development and testing of proprietary designs, and marketing, financial, and sales data are accessed and stored on our private network.  Protection of information is mandated by Firion policy and federal and state legislation.  Unauthorized access to the network by cyber criminals or malicious insiders can result in loss of customer information, compromised proprietary business information, severe financial damage, and work outages.  Cyber security threats and vulnerabilities can have a detrimental impact on the future of our business, and every employee is considered a stakeholder in the protection of the network.  Firion will continue to strive to ensure cyber security remains a priority at every level of the company. The goals of Firion's cyber security policy include increasing awareness by providing employees with applicable illustrations of common threats and vulnerabilities in the industry, identifying data classification procedures and rationalizing access control rules, and characterizing sensitive and critical systems while outlining their appropriate safeguards and utilization.

Firion Corporation Cyber Security Policy

Firion Organizational Business Mission

Welcome to Firion!  The Firion Corporation is a leader in the development of specialized safety outerwear and has a niche market in the waste disposal, chemical, and biological industries.  Our customers count on us to deliver quality products that are safe and reliable.  Firion laboratories are constantly at work developing innovative coatings and unique designs to ensure our customers can be confident in the level of protection our products deliver (UMUC, 2010).

Firion’s employees use technology in every aspect of our business.  Databases contain private customer information, unique software assists in the development and testing of proprietary designs, and marketing, financial, and sales data are accessed and stored on our private network.  Protection of information is mandated by Firion policy and federal and state legislation.  Unauthorized access to the network by cyber criminals or malicious insiders can result in loss of customer information, compromised proprietary business information, severe financial damage, and work outages.  Cyber security threats and vulnerabilities can have a detrimental impact on the future of our business and every employee is considered a stakeholder in the protection of the network.  Firion will continue to strive to ensure cyber security remains a priority at every level of the company.

Cyber Security Goals

Firion’s cyber security policy will be kept relevant and up-to-date to the technology in use.  The policy will be communicated to employees on an annual basis to ensure compliance, comprehension, and clarity.  The goals of the cyber security policy are as follows:

·         Increase awareness by providing employees with applicable illustrations of common threats and vulnerabilities in the industry

·         Identify data classification procedures and rationalize access control rules

·         Characterize sensitive and critical systems and outline their appropriate safeguards and utilization

·         Address physical security as the first line of defense in a defense-in-depth security strategy, to include the use of personal computing devices on corporate networks and business devices on the road

·         Ensure all employees understand their role in business continuity and disaster recovery

·         Explain acceptable use of technologies as well as applicable federal and state legislation

Computing Ethics

Ethical practices are about doing the right thing when no one is looking.  Firion is committed to preserving a reputation for sound ethical computing practices.  Though Firion will take every precaution to protect employee and customer private data located on its systems, it is important to understand that no system is 100 percent secure.  Employees and network users can contribute to network security and information privacy by following these ethical guidelines:

·         Corporate privacy policies will disclose what information is collected and how information is utilized and stored

·         All e-mails sent over Firion networks are subject to monitoring. Employees are expected to conduct business communications in a professional manner, limiting e-mail sent for personal use

·         Employees will not present themselves as representatives of Firion outside of corporate functions nor use their professional titles in public online forums

·         Internet usage is monitored, and employees are expected to use corporate Internet for business and limited personal use purposes. Certain Internet Web sites are blocked if considered a threat to the network or not necessary for business practices

·         Employees must use software issued and approved by Firion. Unlicensed software or freeware is not authorized for use on corporate assets. Exceptions to this policy can be granted by Firion's Information Assurance manager

·         Personal computing devices and mobile telephones are not authorized on corporate property. Employees will be provided with lockboxes for securing their valuable items

·         All company-issued mobile computing devices are subject to auditing and virus scanning prior to being connected to corporate networks

·         Employee passwords will not be shared with anyone or recorded. Passwords must meet minimum complexity requirements and be changed every 90 days

·         Ethical computing concerns can be brought to Firion's Information Assurance manager for consideration or evaluation at any time

 

Cyber Security Policy Introduction

Cyber security is essential for virtually any organization, including Firion.  One of the reasons it is so vital to ensure that computer networks and systems within an organization are secure is that cyber criminals both inside and outside the organization pose a serious threat to businesses.  In order to protect against cyber attackers, whether they are inside or outside the organization, Firion has developed a policy that describes how it intends to secure its computer networks and systems.

It is not enough for Firion to simply develop a cyber security policy and sit back, assuming its network and systems will suddenly be secure.  Firion must also ensure that employees understand and comply with the policy.  This is necessary because Firion's employees, like the employees of any organization, are the weakest part of the network.  Even the most state-of-the-art cyber security technologies will not protect Firion's networks and systems if employees engage in behavior that jeopardizes their security.  Sharing passwords, leaving passwords on Post-it notes displayed on computer monitors, or clicking on links in e-mails sent by unknown people are all examples of how easily a network can be jeopardized.  To prevent these and other behaviors that may expose Firion's network and systems to cyber security threats, the company must be sure that its employees understand and are complying with the company's cyber security policy.

By implementing a strong cyber security policy and ensuring that employees understand and comply with that policy, Firion is taking a crucial step in securing its network and systems from cyber security threats.  In addition, a strong cyber security policy coupled with employee understanding and buy-in will help prevent Firion from experiencing the negative effects of cyber security breaches.  For instance, by protecting its systems from cyber security threats, Firion will also be working to prevent unauthorized access to information stored on those systems, including trade secrets, customer payment information, and confidential personnel information such as Social Security numbers.  The loss of such information could have serious consequences for Firion.  The consequences of a competitor obtaining the company's trade secrets could be very serious, since these secrets form the basis of our business.  In addition, the loss of sensitive information such as employees' Social Security numbers could result in Firion absorbing the expense of credit monitoring for affected employees, while the theft of customer payment information could result in a loss of trust among Firion's customers.  Customer dissatisfaction can also have financial ramifications for the company and could create potential legal liability (Feigelson & Calman, 2010).

Achieving Employee Buy-In for Firion’s Cyber Security Policy

Now that the importance of employee understanding and compliance with Firion’s cyber security policy has been demonstrated, it is pertinent to spell out how Firion plans to achieve the level of employee support and buy-in that is necessary for this cyber security policy to be effective.  Firion will practice a three-pronged approach: education, rewards for compliance, and penalties for non-compliance.

Firion will seek to educate employees about cyber security by requiring them to participate in a Web-based training program when they are hired.  An annual refresher course will also be required for all employees.  Web-based training has proven to be one of the most effective ways to educate employees about cyber security issues (Rudolph, 2009, p. 28).  Web-based courses are an optimal method for training because they can be taken at any time and are self-paced (Rudolph, 2009, p. 29).  In addition, Web-based courses can be tailored to the needs of employees based on their levels of experience and various interests (Rudolph, 2009, p. 29).

Rewarding or Punishing Employees for Complying or Not Complying with Firion’s Cyber Security Policy

Additional steps will need to be taken to ensure that employees understand and comply with Firion’s cyber security policy.  For example, employees will be required to sign an agreement stating that they understand the policy and that they intend to comply with it. Requiring employees to sign compliance statements is an effective way of making them more security aware and committing them to comply with policies that are put in place to protect Firion’s network and computer systems (Rudolph, 2009, p. 30).

Rewards and punishments are another necessary component of Firion's efforts to ensure that employees understand and comply with the cyber security policy.  Firion should not treat compliance with its cyber security policy as just another core job requirement, as this approach has proven unsuccessful in the past.  Government agencies, for example, once treated cyber security as a core requirement and did not attempt to give it special emphasis (Rudolph, 2009, p. 8).  These agencies eventually began to suffer from a growing number of security breaches (Rudolph, 2009, p. 8).  Firion should not and cannot make the same mistake these government agencies did.  We at Firion recognize that security needs to be an area of special concern that is emphasized frequently so that our network and systems can be properly protected from cyber security threats (Rudolph, 2009).

In order to emphasize security as a special area of focus, employees will be given rewards for complying with Firion's cyber security policy.  These rewards will be based partly on informal security audits performed by members of Firion's information technology (IT) security department.  Once a month, a member of the IT security department will walk around the company's office and observe employee behavior, such as whether passwords are written on Post-it notes visible in the work area or computers are left powered on and logged in while employees are away from their desks.  Employees who are found not to be engaging in these and other risky behaviors will be given a small reward, such as a gift card to a local retailer or restaurant or a small cash bonus.  Rewards will also be given to the company as a whole based on company-wide compliance with the cyber security policy.  For example, all employees can be rewarded with some type of perk, such as a company-paid breakfast, if the number of cyber security incidents declines on a quarterly or yearly basis, since this would likely indicate that employees understand and are complying with the policy.  Conversely, employees who are found to be violating Firion's cyber security policy will be punished.  The punishment will be based on the severity of the violation, with the most serious violations resulting in termination and potential legal implications.  The severity of a violation will be determined by Firion's Chief Information Officer (CIO).

In addition, compliance with Firion’s cyber security policy will be one of the areas that managers will consider when conducting annual performance reviews. Employees who are found not to have violated Firion’s cyber security policy over the past year will be given a monetary bonus.  Those who are found to have violated Firion’s cyber security policy over the past 12 months will be punished.  This punishment could include the loss of vacation time or other perks. The type of punishment that is given will be decided on a case-by-case basis, though more severe violations will warrant a more severe punishment.  Once again, the severity of a violation of Firion’s cyber security policy will be determined by the CIO.

Procedures for Reporting Security Breaches, Violations of Cyber Security Policy, and Security Vulnerabilities

All employees are required to report security breaches, violations of Firion’s cyber security policy, and security vulnerabilities that they are aware of.  As soon as employees become aware of any security breach, cyber security policy violations, and/or security vulnerabilities, they should immediately notify an IT systems administrator and provide any information that they may have.  This information can include the name of the person who is involved in the cyber security breach or policy violation, the system that contains the security vulnerability, or the system that has been breached, among other things.  Immediate notification will allow Firion’s IT security department to take action on any urgent issues that arise.  By urging employees to report any information that they have about the nature of a security breach, policy violation, or security vulnerability, the IT security department will be able to determine whether or not the issue requires immediate attention.  Any reports that are deemed to be legitimate will be investigated by the IT security department.  The time frame of such an investigation will depend on the seriousness of the security breach, policy violation, or security vulnerability.  After the conclusion of the investigation, the IT security department will address the issue in an appropriate manner.  This includes correcting the security vulnerability, reporting the employee who was found to have violated Firion’s cyber security policy, and taking steps to end the security breach.

Awareness and Information Security

Employees of Firion pride themselves on the quality of the jackets the company produces, the safety these products provide, and the science that goes into making Firion a cutting-edge company.  That pride, however, can have negative effects on the company and its future business.  Because Firion is a cutting-edge company, special attention must be paid to the security of its physical and intellectual assets.  Intellectual property is not just what might be considered a secret formula or an important release date; it can include small pieces of information that could easily be incorporated into a much larger piece.  At Firion we call this desire to be cognizant of information, its use, and how it is protected "Information Security" (Information Security, n.d.).

Many individuals may desire to gain access to information that Firion owns, for a variety of reasons.  These actors may desire access to the company's systems for personal profit, to gain additional information about Firion's scientific developments in order to further their own research, or to sell the information to competitors.  It is also possible that an actor may be disgruntled with Firion and seek to cause harm to the company as a whole (Campbell & Kennedy, 2009).

These actors can be blunt and seek to gain information directly from an employee.  More likely, the actor will lie, cheat, steal, or apply subterfuge to obtain the information they desire.  It is essential that employees are aware these actors are present, as knowing a threat exists is the first step in creating a defense (Voiskounsky & Smyslova, 2003).

In order to protect the employees of Firion, a number of procedures are in place to prevent the deliberate or inadvertent sharing of company information.  Employees of Firion should not act as representatives of the company on either public or private forums unless their job duties entitle them to be public relations representatives.  This protects not only Firion, by ensuring that company data is shared in a controlled fashion, but also the employee, who does not become a target for any derogatory information that may be reported against the company.

Employees at Firion, depending on their duties, are asked to sign non-disclosure agreements.  These agreements are written to protect especially sensitive information.  They are legally binding, allow Firion to maintain control of its company-based intellectual property, and are enforced under U.S. federal law.  The Economic Espionage Act of 1996 is designed not only to protect a company's secrets from being sold to a foreign power but also to protect against the sale of corporate secrets in total.  Under this law, any individual who discloses a trade secret to the economic benefit of anyone other than the owner of that secret can be imprisoned for not more than 10 years, or face up to $5 million in fines (44 USC § 3542, 2002).

Data Classification and Access Control

Data is a critical asset at Firion.  Beyond the day-to-day production of protection equipment, the company has thousands of employees who have provided private, economic, and health-based information to the company.  This data is just as critical to protect as any company secret.  All employees are responsible for information security.  As such, the company has instituted a series of data classifications to help guide employees as to how data should be treated both inside and outside of Firion.

This classification of data is designed to be a tool to help employees protect critical information from being disclosed to illicit actors.  These actors could utilize this data to further their own economic or personal goals (Woodbury, 2007).  Firion classifies data into four separate categories: public, official use only, confidential, and secret.

Public data is that which is made publicly available by the company.  This type of data can include company-produced brochures, pamphlets, or catalogs.  It may also include publicly available press releases as approved or issued by Firion's public affairs branch.  Finally, it includes any and all interactive, publicly available data that may reside on the company Web site.

Official use only data is content that must be guarded due to ethical or privacy concerns.  It must be protected from access, modification, transmission, storage, or any use other than what has been authorized by Firion.  This data type is restricted to employees of Firion and should not be shared outside of the company.  This information can include employment data, company phone books, internal e-mails, or internal memos, and should be kept in protected forms of physical and electronic storage.  Official use only information should not be posted or shared in public forums, whether physical or electronic.  When it is no longer needed, it should be destroyed, shredded, or sanitized.

Confidential data is contractual or protected by statutes or regulations.  This type of data is disclosed only to individuals on a need-to-know basis.  Disclosure of this data can be authorized only by the company president, vice president, or board of governors.  Examples of this type of data may include medical records, Social Security numbers, personnel and payroll records, bank account numbers, personal financial information, and any data that government regulation identifies as protected.  This data should only be stored in a physically locked container or in a password-protected electronic format.  It should not be disclosed without explicit management authorization and must not be published in any public forum.  Finally, confidential data can only be destroyed by shredding or, if in electronic format, by sanitization and degaussing prior to disposal.

Secret data is information that, if released, could potentially damage Firion or lead to substantial loss of economic standing.  This data shall never be disclosed outside of the company.  Individuals who have access to this data shall be bound by a non-disclosure agreement, which legally binds them not to disclose the information.  Examples of this data may include current internal economic statistics, protected manufacturing techniques, or ongoing negotiation information.  This data should only be stored on authorized systems that are separate and protected from day-to-day systems.  All data on such a system should be protected by a strong password at a minimum.  This information should never be shared, printed, or created into a physical form.  Destruction of this data must be through an authorized electronic process that includes sanitization and degaussing of magnetic materials.

Data classification is designed to ensure Firion is in compliance with a number of federally mandated laws.  All health-related information is required to be protected by the Health Insurance Portability and Accountability Act (HIPAA) (HHS, 2003).  The Privacy Act of 1974 guarantees the protection of personal information (5 USC § 552A, 1974).  Financial data is regulated, protected, and managed under the Sarbanes-Oxley Act of 2002 (Public Law 107-204, 2002).  Finally, company secrets are protected under the Economic Espionage Act of 1996 (44 USC § 3542, 2002).
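The four classifications lend themselves to a simple lookup table of handling rules.  The Python sketch below is an illustrative model rather than actual Firion code: it condenses the policy above into per-level permissions so that an application could refuse a disallowed action, such as printing secret data.

# Condense the classification policy into per-level handling permissions.
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    OFFICIAL_USE_ONLY = 2
    CONFIDENTIAL = 3
    SECRET = 4

HANDLING = {  # hypothetical distillation of the policy text
    Classification.PUBLIC:            {"share_externally": True,  "print": True},
    Classification.OFFICIAL_USE_ONLY: {"share_externally": False, "print": True},
    Classification.CONFIDENTIAL:      {"share_externally": False, "print": True},
    Classification.SECRET:            {"share_externally": False, "print": False},
}

def permitted(level, action):
    return HANDLING[level].get(action, False)  # default-deny unknown actions

print(permitted(Classification.SECRET, "print"))             # False: never physical
print(permitted(Classification.PUBLIC, "share_externally"))  # True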

Sensitive and Critical Systems

Because of the importance of data at Firion there are many different types of authorized systems utilized inside the company.  These systems can include the computers that individuals use on a day-to-day basis, the laptop that a team uses when it travels to create a presentation for a potential customer, the Blackberry that an executive receives e-mail on, or the closed network computer that individuals utilize while working on proprietary data.

These systems are increasingly vulnerable to potential attack or intrusion by an ever-growing community of qualified people with the intent to steal data.  These actors may seek access for monetary gain for themselves or the company they work for, for personal reasons, or out of a desire to destroy Firion's capabilities from the inside (Verduyn, 2005).  They can use a number of vectors to access Firion's systems, including direct attacks from an external network source such as the Internet, a virus spread from a Universal Serial Bus (USB) drive, or pirated or unauthorized software used as a cover to gain access.  These actors are smart and will use any and all potential avenues to gain access to Firion's systems.

Because of these vulnerabilities, Firion has instituted a strict policy concerning the use of systems.  Personal systems, capabilities, or software are never to be used on or with company-owned networks, systems, or software.  It is unacceptable for employees to have USB drives, wireless devices, or personal electronics in the workplace.  No item is to be put in contact (wired or wireless) with a company-owned system until it has been scanned and authorized by a qualified company network systems administrator.  No company system will be allowed to connect to an unauthorized system outside of the company network architecture without the authorization of a system administrator and information assurance manager.  Finally, any and all systems used outside of the company network will be audited as soon as they are returned to a company workspace and before they can be used on the company-owned network architecture.

This regulation enables Firion to remain in compliance with the Sarbanes-Oxley Act of 2002, which mandates that companies maintain internal controls, specifically for financial information (Public Law 107-204, 2002).  By controlling and protecting all systems within Firion, the company is able to ensure that all financial data is secure.

 

Physical Security

Firion can prevent or counter some security mishaps simply by being proactive about the company's physical security.  Physical security relates to any device that is used to protect or prevent inside or outside threats from damaging an organization's proprietary information, networks, or assets.  If properly mandated, hackers and employees alike have less of a chance to infiltrate a system with malicious intent.  Performing regular surveys to assess exactly what Firion's needs are regarding security allows management to see the threats and vulnerabilities faced by the company, aside from human factors, as well as the positive enforcement that is already in place.

With the amount of activity and people involved in day-to-day operations on-site, it is mandatory for a company that deals with so many outside sources to have a strict entry and exit policy.  Starting from the outside of the building, the physical security program includes guards who approve the entry of vehicles; identification badges that show each employee's, contractor's, or vendor's access privileges and expiration dates; parking passes that correspond to specific cars; and posts and patrols actively covering their assigned areas (U.S. Department of Education, 2008).

Firion will be proactive in securing its buildings so that the chances of unwanted guests or cybercriminals gaining access to the property are lessened.  A gate occupied by a guard will keep track of who is entering and exiting the facility, and the company will also record these interactions on surveillance cameras.  Once a person has been approved to enter the facility, access badges with proper identification will categorize exactly what access the person has and where he or she can go throughout the building.  It is pertinent that Firion keep up to date with security compliance so that all individuals holding a badge are documented, recorded as they scan through turnstiles, and promptly revoked access after their badge has expired or after they have been terminated.  Employees are also required to register their vehicles once they are given access to enter the facility, as a way to keep track of vehicles that enter the premises without being overly burdensome.  Marked parking passes eliminate extra work for the security guards and patrols, whose attention can be focused on visitors and other vehicles that are new to the building or making drop-offs.

With these procedures in place, threats and vulnerabilities associated with physical security are lessened.  Employees will not have access to areas that do not relate to their job functions, nor will they be able to enter certain parts of the facility during the day or after hours without their badge being scanned.  Once a badge is scanned, a log is kept to track exactly where the holder is located in the building and how long he or she remains before entering a new section.  Employees will also use access badges when logging into computers, as the level of computing privileges and information access is stored on each individual badge.  A simple user will not have the same access privileges as an administrator and will ultimately not be able to modify settings on his or her computer or download unlicensed software that may unknowingly harm the system or network.  By using this mechanism, separation of duties will be clear to employees, and they will never have to question whether they have the rights to perform certain actions.
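The badge-scan logic described above can be modeled in a few lines: each badge carries an access level, each door requires one, and every scan is written to an audit log.  All badge numbers, door names, and access levels in this Python sketch are hypothetical.

# Badge scans: grant access by level and record every attempt.
from datetime import datetime

BADGES = {"B-1001": {"owner": "jdoe", "level": 2, "active": True}}
DOOR_REQUIREMENTS = {"office": 1, "lab": 3, "server_room": 4}
audit_log = []

def scan(badge_id, door):
    badge = BADGES.get(badge_id)
    granted = bool(badge and badge["active"]
                   and badge["level"] >= DOOR_REQUIREMENTS.get(door, 99))
    owner = badge["owner"] if badge else "unknown"
    audit_log.append("%s %s %s %s" % (datetime.now().isoformat(), owner, door,
                                      "GRANTED" if granted else "DENIED"))
    return granted

scan("B-1001", "office")       # granted: level 2 meets requirement 1
scan("B-1001", "server_room")  # denied: level 2 is below requirement 4
print("\n".join(audit_log))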

Outside threats and vulnerabilities for employees working while on travel or from home can be a problem if employees do not take necessary precautions.  Employees with portable laptops should always be cautious when traveling and connecting to other networks or unsecured Wi-Fi.  The Information Technology (IT) department will ensure proper security settings are in place before distributing laptops and will require users to attend a mandatory training session on what is and is not acceptable when it comes to downloading software or using USB and other external devices.

Employees, contractors, and vendors alike must be aware of the acceptable use policy in place at Firion.  Ongoing security awareness training and mandatory continuing education will help reduce the human errors that could contribute to security violations and other mishaps.  When the whole company follows proper standards and procedures, it is easier to see where the problem areas rest.  With employees being identified before reaching the building, wearing access badges, and locking computers when not in use, physical security becomes less of a risk to the organization.  Once employees are made aware of how important their role is in making the company more secure and have received positive reinforcement of some sort, compliance naturally increases.

Data Back-up and Disaster Recovery

In order to recover from a disaster or data-loss incident, Firion will securely back up data on a regular basis, with frequency depending on the system, and store back-ups at an off-site location.  Firion will maintain data access controls so that archived data can be retrieved without much effort and is readily available when needed.  Storing information (servers, hard drives, or copyrights) at the off-site location is a good way to mitigate threats to security.  Not only is the off-site facility secure, it has a better chance of surviving a natural disaster and is unknown to virtually everyone who works for the company except the specifically identified members of the disaster recovery team.  Each team member with access to the off-site facility is recorded and required to sign in and out when entering and exiting the building, which keeps a running log of who is accessing what and when.
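A simple check can confirm that the backup cadence is being met.  The Python sketch below assumes a per-system policy of maximum backup age and a catalog of last off-site backup times, both hypothetical, and flags any system whose most recent backup is older than its policy allows.

# Flag systems whose last off-site backup exceeds the allowed age.
from datetime import datetime, timedelta

POLICY = {"customer_db": timedelta(days=1), "file_server": timedelta(days=7)}
LAST_BACKUP = {  # hypothetical catalog from the off-site facility
    "customer_db": datetime(2013, 6, 8, 23, 0),
    "file_server": datetime(2013, 5, 25, 2, 0),
}

def stale_backups(now):
    return [name for name, max_age in POLICY.items()
            if now - LAST_BACKUP[name] > max_age]

print(stale_backups(datetime(2013, 6, 9, 12, 0)))  # ['file_server']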

To be sure Firion is able to maintain business continuity, a disaster recovery plan will be regularly updated.  One cannot automatically assume that having a disaster recovery plan means it will ever be put to use; rather, it should be looked to as preventative maintenance.  A company is more apt to survive a disaster when it is prepared for the worst.  Systems or networks that have been hacked or attacked by malware or viruses normally result in downtime as well as financial loss.  With a recovery plan in place, data is backed up and easily accessible, risk assessments have been periodically conducted to ensure security policies are sufficient, and government regulations have been taken into consideration.

Threats and vulnerabilities associated with faulty equipment, such as a firewall that was not patched with the most up-to-date software, would be addressed during the initial creation of the disaster recovery plan.  Outsider threats that could potentially damage the organization would be denied, and insider threats would be easily detected.  Each member who participates in the disaster recovery plan will have a clear understanding of his or her roles and responsibilities and will take an active role in updating the user community on policies and procedures.

Overall, if the employees of Firion stick to the cyber security policy that has been put in place, the company will have a successful track record when dealing with insider and outsider threats.  Positive reinforcement, mandatory training, and simply being knowledgeable about security vulnerabilities are all motivating factors for employees to follow processes and procedures.  The monthly periodic reviews are also a good way to make sure the security policy is being enforced.  Although physical security, inside and outside the organization, is definitely a key factor when it comes to protecting a company's assets, the manner in which Firion deals with human factors is what will determine how successful the company will be in mitigating the threat from cyber criminals and malicious insiders.

Addendum 1

References

5 USC § 552A.  (1974). Privacy Act of 1974.  Retrieved from http://www.law.cornell.edu/uscode/text/5/552a

44 USC § 3542.  (2002). Economic Espionage Act of 1996.  Retrieved from http://www.law.cornell.edu/uscode/text/44/3542

Campbell, Q. & Kennedy, D.M. (2009). The Psychology of Computer Criminals. In Computer Security Handbook, Volume 1, 5th Edition (pp. 12.4-12.8). Hoboken, NJ: John Wiley & Sons, Inc.

Department of Health and Human Services (HHS). (2003, May). U.S. Department of Health and Human Services: Summary of HIPAA Privacy Rules. Retrieved from http://www.hhs.gov/ocr/privacy/hipaa/understanding/summary/privacysummary.pdf

Feigelson, J., & Calman, C. (2010, April). Liability for the costs of phishing and Internet theft. Journal of Internet Law, 13(10), 1. Retrieved from http://www.aspenpublishers.com/

Information Security. (n.d.). definition from PCmag.com Encyclopedia. Retrieved from http://www.pcmag.com/encyclopedia_term/0,1233,t=information+security&i=44958,00.asp

Public Law 107-204: Sarbanes-Oxley Act of 2002.  (2002). Retrieved from http://www.gpo.gov/fdsys/pkg/PLAW-107publ204/content-detail.html

Rudolph, K. (2009). Implementing a security awareness program. In S. Bosworth, M.E. Kabay, & E. Whyne (Eds.), Computer security handbook volume 2, 5th edition (pp. 8, 28-30). Hoboken, NJ: John Wiley & Sons, Inc.

Smith, J. & Kelley, D.E. (2010, July). UFC/ISC security design criteria overview and comparison. Applied Research Associates, Inc. Retrieved from http://www.wbdg.org/resources/ufc_isc.php

UMUC. (2010). Interactive Case Study. Document posted in University of Maryland University College CSEC 620 9082 online classroom, archived at: http://webtycho.umuc.edu/

U.S. Department of Education. (2008, January). Administrative communications system, Departmental directive.  Retrieved from http://www2.ed.gov/policy/gen/leg/foia/acsom4114.pdf

Verduyn, B. (2005). 2005 FBI Computer Crime Survey. Retrieved from http://mitnicksecurity.com/media/2005%20FBI%20Computer%20Crime%20Survey%20Report.pdf

Voiskounsky, A. & Smyslova, O. (2003). Flow-Based Model of Computer Hackers' Motivation. CyberPsychology & Behavior, 6(2), 171-180. doi: 10.1089/109493103321640365

Woodbury, C. (2007). The Importance of Data Classification and Ownership. Retrieved from http://www.srcsecuresolutions.eu/pdf/Data_Classification_Ownership.pdf


How Private is your Social Network?

How Private is Your Social Network?

Amy Wees

UMUC

CSEC620

April 19, 2012

Abstract

Social networking websites such as Facebook, Twitter, and Google+ provide communication and advertising services to individuals, businesses, and marketers.  Facebook was ranked by Google as the most visited site in 2011, with 880 million users and an astonishing one trillion page views (Google, 2011).  What makes social networking sites so popular?  Most sites are free to use and provide an easy way to keep in touch with family and friends near and far.  All of this sharing of information has allowed businesses to track user interests and gain valuable information about consumers that can be customized to assist in sales.  In the same way, users can conduct searches of people and find out more about them by viewing their profile information on various sites.  Although this information is convenient, is it safe?  Privacy policies are intended to inform the user of how their information will be stored, shared, and utilized by the entity collecting or requesting the data.  This paper will examine the use and privacy policies of three popular social networking sites, Facebook, Twitter, and Google+, and identify ways in which the policies can be improved to benefit both the websites and their customers.


How Private is Your Social Network?

Social networking websites such as Facebook, Twitter, and Google+ provide communication and advertising services to individuals, businesses and marketers.  Social networking websites are so captivating that 82 percent of the world’s 1.2 billion Internet users spent one of every five minutes online logged into a social networking site in October of 2011 according to research firm comScore (comScore, 2011).  Facebook was ranked by Google as the most visited site in 2011 with 880 million users and an astonishing one trillion page views (Google, 2011).  What makes social networking sites so popular?  Most sites are free to use and provide an easy way to keep in touch with family and friends near and far.  People can share photos, quick updates on their life’s happenings, play games with friends, network for employment, sell products or market businesses, and meet new people with similar interests (GEV, 2011).

The popularity of social networking sites has caught the eye of marketers across the Internet.  Users can now "like" a business on Facebook, "tweet" about a product or interest on Twitter, or add their most recent book purchases to their Google+ profile page.  All of this sharing of information has allowed businesses to track user interests and gain valuable information about consumers that can be customized to assist in sales.  In the same way, users can conduct searches of people and find out more about them by viewing their profile information on various sites.  Although this information is convenient, is it safe?  If a user signs up and creates a profile of personal information to share with friends, how much of that information should be made public?  Personal information can be used to destroy a person's reputation, steal their identity, or unfairly stereotype them.  In 2009, researchers from Carnegie Mellon University were able to accurately predict the Social Security numbers of over 500,000 Americans using various online data sources to gather individuals' places and dates of birth (Acquisti & Gross, 2009).  For this reason it is vital for personal information to be protected and shared only with the consent of the individual.  Privacy policies are intended to inform the user of how their information will be stored, shared, and utilized by the entity collecting or requesting the data.  This paper will examine the use and privacy policies of three popular social networking sites, Facebook, Twitter, and Google+, and identify ways in which the policies can be improved to benefit both the websites and their customers.

Privacy Policies

A privacy policy is defined by BusinessDictionary.com as a "Statement that declares a firm's or website's policy on collecting and releasing information about a visitor. It usually declares what specific information is collected and whether it is kept confidential or shared with or sold to other firms, researchers or sellers" (Business Dictionary, 2012).  Websites are highly encouraged to have privacy policies, although they are not required by United States law unless information is being collected from children under the age of 13.  There are currently bills in Congress awaiting approval that would strengthen legislation for the protection of personally identifiable information (PII).  One such bill is the Commercial Privacy Bill of Rights.  This bill would require businesses to notify customers of practices for collecting information and to protect that information, but would prevent businesses that are only marketing to customers from collecting or storing personal information (Kerry, 2011).

The Federal Trade Commission (FTC) is responsible for governing privacy policies and prosecuting those who violate their own privacy policies under the Federal Trade Commission Act (Connelly, 2010).  In a 2007 report to Congress, the FTC noted that "although 85 percent of over 1400 websites surveyed collected personal information from consumers, only 2 percent provided a comprehensive privacy policy and 14 percent provided notice to consumers regarding information practices" (Federal Trade Commission, 2007).  A more recent FTC report in 2012 continues to urge Congress to enact baseline privacy legislation and notes that "overall, consumers do not yet enjoy the privacy protections proposed in the preliminary staff report" (FTC, 2012).  The FTC (2012) also noted it would concentrate on improving consumer privacy in five key areas:

  1. "Do Not Track": Giving consumers mechanisms to avoid having their activity tracked on the Web
  2. "Mobile": Helping businesses create short and effective privacy disclosures for mobile applications
  3. "Data Brokers": Requesting legislation to require that consumers be notified of personal information held by data brokers
  4. "Large Platform Providers": Discouraging Internet service providers and other large entities from tracking consumers' activities online
  5. "Promoting Enforceable Self-Regulatory Codes": Developing and enforcing sector-specific codes of conduct for businesses and law enforcement to follow

Reading Privacy Policies

Privacy policies are commonly lengthy, rely on broad legal terminology, and are confusing to consumers.  Research conducted at Carnegie Mellon University by Aleecia McDonald and Lorrie Cranor found the average policy to be 2,500 words with a reading time of 10 minutes, for a total of 250 hours per year for the average number of websites visited (Vedantam, 2012).  Perhaps this is why research shows that extremely few website visitors actually read privacy policies, while others provide the necessary personal information for sign-up and hope for the best.  Forrester Research studied visits to six popular travel websites for one month and found that less than 1 percent of visitors viewed privacy policies (Regan, 2001).

Social networking sites host an enormous amount of PII of their users.  In order for customers to protect their information, they need to ensure they understand the privacy policies and limit the amount of personal information they post online.  It is necessary to delve further into the privacy policies of these sites to determine whether privacy and online social networking are compatible.

Social Networking Website Privacy

Facebook

Facebook has a short history of just over eight years but has made a big impact on the world.  According to Facebook's About page, "Facebook's mission is to make the world more open and connected. People use Facebook to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them" (Facebook, 2012).  Facebook reported 845 million active users and 425 million mobile users in December of 2011, with 80 percent located outside of North America (Facebook, 2012).  To sign up for a free Facebook page, the user must provide a name, e-mail address, password, gender, and date of birth.  The date of birth is required to limit children's access to certain content, and users are able to hide this information from their profile after signing up.  Additionally, the fine print above the sign-up icon states that users have read and understand the terms and data use policy (Facebook, 2012).

Terms

The Terms link leads to a 4,205-word document titled "Statement of Rights and Responsibilities" that covers in detail the ways in which information on Facebook is used and the responsibilities of the user when adding, deleting, or sharing information.  There is also a statement notifying the user that the document could change at any time and directing the user to become a fan of the governance page should they want notification of changes.  The document explains that users can hide certain information from their profiles but does not give any specifics on procedures for doing so (Facebook, 2012).  Using the average reading speed of 250 words per minute from McDonald and Cranor's research, this document would take the user about 17 minutes to read.
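The reading-time figures in this section follow from simple arithmetic at McDonald and Cranor's average reading speed of 250 words per minute, as this small Python check confirms.

# Reading time in minutes at a given words-per-minute rate.
def reading_minutes(words, wpm=250):
    return words / wpm

print(reading_minutes(2500))  # 10.0 minutes for the average privacy policy
print(reading_minutes(4205))  # ~16.8, about 17 minutes for Facebook's terms
print(reading_minutes(6910))  # ~27.6, about 28 minutes for the data use policy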

Data Use Policy

By clicking sign up, the user has also agreed to the data use policy, which is Facebook’s title for its privacy policy.  This document consists of 6,910 words covering what information is available to Facebook, how this information is used, how long the information is kept, and how the user may remove their information from the site by deleting their account.  Key information from the document is that the user’s name, photos, and network are always publicly available.  Users’ photos, comments, and information posted about them by other users are also public.  Specifically, if a user posts a comment on a business’ page, that business now owns the comment and may use it any way it likes, within or outside of Facebook (Facebook, 2012).  The data use policy also covers communication with advertisers and how to manage the data shown on users’ and friends’ pages.  Average reading time for this document is about 28 minutes at 250 words per minute.  Fortunately, Facebook has created interactive tools to help the user navigate the document and view or change privacy settings.  Figure 1 shows the navigation page for Facebook’s interactive tools (Facebook, 2012).

Figure 1: navigation page for Facebook’s interactive privacy tools (image not reproduced)

 

Most information on Facebook is publicly available unless the user follows the guidelines in the Data Use Policy to remove or protect their information from certain users.  Unfortunately, users cannot control information friends post about them or photos they are “tagged” in.  Facebook notes in the terms that it collects data about a user’s location, interests, and friends in order to provide them with a better experience (Facebook, 2012).  Facebook’s policies have caused an uproar among users and brought legal consequences.  For example, in 2011 the FTC charged Facebook with breaking its own privacy policies without notifying users: changing the site so that information users thought was private was made public, allowing third-party applications access to the personal data of users and their friends, falsely claiming it had verified the security of applications, allowing access to users’ photos and videos even after accounts were deactivated or deleted, and violating data transfer laws between the U.S. and Europe.  The charges forced Facebook to clean up its policies and website and submit to privacy audits for the next 20 years (FTC, 2011).  More recently, a user in Mississippi filed a class-action lawsuit against Facebook claiming the site tracked her with cookies from “like” icons on various sites even when she was logged out, something the site’s privacy policy states will not happen (Goodin, 2011).

Improvements

In a November 2011 blog post, Facebook CEO Mark Zuckerberg admitted the company had made many mistakes with its privacy policies and outlined improvements to be made.  Among those mentioned were tools to help users understand and view what information was public (such as the interactive tool in Figure 1), notifying users when they are “tagged” so they can review the postings, an application dashboard allowing users to view what information applications have access to, making friends lists easier to manage, and including permissions options on each post (Zuckerberg, 2011).

Facebook can also benefit by ensuring third-party applications are safe and do not require separate privacy policies for users to consent to.  Users would benefit by logging into Facebook and trusting that the applications they use are safe and are not collecting personal information, and Facebook would prevent further lawsuits and trouble with the FTC.  Another benefit to the user and the company would be to simplify privacy settings across the board.  Users should not have to select a privacy setting every time they make a posting, or repeatedly go through their friends list to control who has access to what information.  User information and interests should be shared only with friends, not with friends of friends, third-party applications, or advertisers.  The current policies force a user to “like” a business in order to interact with it; once that happens, the business has access to the user’s information and the use of all postings the user makes.  Users should be able to show interest in a business without giving out their personal information for marketing purposes.  Facebook would benefit by gaining the trust of its users while still allowing businesses to market on the site, without the liability of protecting additional customer information (Reisinger, 2010).

Twitter

     Twitter calls its site an “information network” and requires only an e-mail address and password from a user to sign up.  Users then have the option to add additional information to their profile such as a name, location, and website.  Twitter uses “tweets” or microblogs to communicate with the world.  Tweets consist of short messages or photos from a user, business, or community effort.  Users can participate in the conversation or just read comments from other entities or users that interest them.  Users can search for tweets from any user by topic or follow all of another user’s posts (Twitter, 2012).  In September 2011, Twitter had 100 million active users who logged in at least once a month and 362 million registered users (Bennett, 2012).

Terms

Twitter does not present a disclosure statement at sign-up for users to consent to its terms or privacy policy, nor are these documents shown to the user as part of sign-up; the terms do state, however, that by accessing the website’s services the user agrees to them.  The terms document consists of 2,985 words explaining that the user is responsible for all content posted on the site, that strong passwords are important, that any content posted grants Twitter an unlimited license to reuse or copy it, and that Twitter is not responsible for any liability related to posted content and has the right to remove content if necessary (Twitter, 2011).  Overall the document is much more straightforward than Facebook’s terms document and makes it clear to the user that anything posted on Twitter is available to the world.

Privacy Policy

The Twitter privacy policy is 1,440 words long and explains that any information provided to Twitter will be made public on Twitter anywhere in the world unless specified otherwise in the user’s profile or settings.  The policy states “Our Services are primarily designed to help you share information with the world. Most of the information you provide to us is information you are asking us to make public. This includes not only the messages you Tweet and the metadata provided with Tweets, such as when you Tweeted, but also the lists you create, the people you follow, the Tweets you mark as favorites or Retweet and many other bits of information” (Twitter, 2011).

The policy also covers the information Twitter collects from users, including log data such as Internet Protocol addresses, mobile phone numbers, device names, and searches; cookies; links clicked on; and interaction with advertisers or marketers.  Like Facebook, Twitter notes that its policy can be changed at any time and that users will be notified via e-mail or their Twitter account.  Unlike Facebook, Twitter does not offer third-party applications within the site or request PII such as date of birth, age, relationship status, gender, education or work history, or names of family members.

Improvements

The biggest threats to a Twitter account are impersonation or misrepresentation by someone logged in as another user, and users clicking on malicious web addresses posted by other users (Reisinger, 2009).  Unfortunately, Twitter has not yet deployed a safer way to authenticate third-party applications, leaving any application that is granted access to Twitter holding the username and password of the user’s account.  Twitter plans to implement authentication similar to Facebook’s, in which the user gives a mobile application only their Twitter username and then uses Twitter itself to log in and grant the application permission for access; a sketch of this delegated-authorization pattern appears below.  There are too many Twitter usernames and passwords floating around in third-party application databases for users to feel safe about their credentials (Reisinger, 2009).  In February 2012, it was reported that the Twitter mobile application was copying users’ contact lists from their phones and storing this information on the website’s servers.  The application creators claimed it was an oversight in an attempt to assist users in finding their friends on Twitter (Skynews, 2012).
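
The flow described above is essentially delegated authorization (OAuth).  A sketch of how a third-party app can obtain access without ever seeing the user’s password, using the third-party requests_oauthlib library; the endpoint URLs are Twitter’s published OAuth 1.0a endpoints as of this writing, and the credentials are placeholders:

from requests_oauthlib import OAuth1Session

# Placeholder app credentials issued by the service, never the user's password.
CLIENT_KEY = "app-consumer-key"
CLIENT_SECRET = "app-consumer-secret"

oauth = OAuth1Session(CLIENT_KEY, client_secret=CLIENT_SECRET)

# Step 1: the app obtains a temporary request token.
oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")

# Step 2: the user approves access on Twitter itself; the app never
# sees the account password.
print("Authorize at:", oauth.authorization_url("https://api.twitter.com/oauth/authorize"))

# Step 3: the verifier the user brings back is exchanged for an access
# token scoped to this app, which the user can revoke at any time.
verifier = input("Verifier: ")
tokens = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token", verifier=verifier)
print("Granted token:", tokens["oauth_token"])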

Security is linked to privacy when accounts are compromised and a person’s information is used without their consent.  Twitter must find ways to improve sign-on services, set clear requirements for third-party applications, and educate users on the dangers of providing account details to non-affiliates.

Google+

     Google+ is very similar to Facebook in that a profile is created and users provide their name, employment details, interests, and various other details to a page that friends can see.  Google+ differs in that it was designed with social circles in mind, allowing users to add their contacts to circles according to which details they want the members of that circle to see.  For example, instead of deciding who can see each individual status message as it is posted, as with Facebook, Google+ allows users to exempt an entire group from all status messages, simplifying the process (a toy model of this circle-based sharing appears below).  Google+ also allows users to view their page as it looks to each social circle at any time, without having to navigate to a special tool as Facebook requires.  Additional features unique to Google are video hangouts, where a group of friends can video chat at the same time, and the ability for users to make public posts and blogs viewable to the entire community (Google, 2012).  In February 2012 Google+ had over 100 million registered users, and membership is growing at a fast pace (Allen, 2012).
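
A toy model of the circle-based sharing just described, with names, circles, and posts invented purely for illustration:

# Each post is shared with named circles, and a viewer sees only posts
# shared with a circle they belong to.
circles = {
    "family": {"alice", "bob"},
    "coworkers": {"carol"},
}
posts = [
    ("weekend photos", {"family"}),
    ("project update", {"coworkers"}),
    ("public announcement", {"family", "coworkers"}),
]

def visible_to(viewer: str):
    """Return the posts the viewer may see based on circle membership."""
    return [text for text, audience in posts
            if any(viewer in circles[c] for c in audience)]

print(visible_to("alice"))   # family posts plus the shared announcement
print(visible_to("carol"))   # coworker posts plus the shared announcement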

Terms

In March 2012 Google replaced 60 separate documents used to define terms of use and privacy within its various services with one policy for all services.  There is an overview page explaining the changes and a quick link to the terms of service and privacy policy.  The terms of service are similar to Facebook’s and Twitter’s in that they explain that any content posted grants Google a license to use it as needed.  Other items of importance are that open source software owned by Google can be used by users but not copied or redistributed, and that Google’s liability is limited to the amount paid to use the service (Google, 2012).

Privacy Policy

Google’s privacy policy explains “what information is collected and why, how that information is utilized, and how to access and update information” (Google, 2012).  The policy is similar to Twitter’s in that it explains that Google collects and stores data from the information given for a public profile, device or hardware information, cookies, log information, and location and application-specific information related to a user’s operating system.  Similar to Facebook, Google explains that it uses the information collected to provide an improved and tailored user experience.  The policy also notes that information will be shared with third parties only with a user’s consent (Google, 2012).

Improvements

Google’s policy lacks specific details on how to update incorrect user information or restrict information to certain parties.  This could be improved by providing links for updating information within the privacy policy for each service offered.  Although the new all-in-one privacy policy claims to make for an easier user experience, Google has been under scrutiny because many customers do not want their private information shared between services and combined into one single profile.  An article on RT.com news states “it’s not like Google doesn’t already collect a lot of information about its customers. When you are using Android mobile phones, Google can access your contacts and location. If you are searching for something on the internet, Google remembers all the search terms. When you sign into your Google account, it can track the sites you visit” (RT, 2012).  This scrutiny is compounded by reports that Google tracked Apple device users without their consent by circumventing the cookie-blocking mechanism in the Safari web browser (Rawson, 2012).

Google can improve its privacy policy by making specific information about protecting data within each service easy to find and understand.  For example, the privacy policy users currently reach from within Google+ is the all-in-one policy, which provides no specifics on how to protect the Google+ profile except within user tutorials.  Users should know how their information is being used within each Google service and how they can change their privacy settings or opt out of information sharing.  Google+ improved on the privacy of Facebook’s social network pages by creating social circles, but it has room to improve upon the short, broadly worded privacy policy covering its many services.

Conclusion

            Social networking sites like Facebook, Twitter, and Google+ have changed the way people communicate and the way businesses market around the world.  There are countless options to share photos, products, life events, videos, and opinions online.  Unfortunately, somewhere amid all of the excitement and new technology, privacy was lost.  Users learned the hard way not to get too personal online after reputations were destroyed, identities stolen, and feelings hurt.  Technological innovators created appealing new applications without security or privacy in mind, and those that have survived the backlash from citizens and governments are backtracking to fix old software and redesigning new applications.  Legislation is needed to enforce privacy policies and allow the FTC to regulate and audit business standards for privacy protection.  The social networking websites that have privacy policies need to improve the way these policies are written to ensure they are easy for the user to navigate, read, and understand.  Equally necessary is the ability of the business to comply with the privacy policies it creates.  The way the world communicates may be changing by the day, but privacy cannot be ignored in the innovations of the future.


References

Acquisti, A., & Gross, R. (2009). Predicting Social Security numbers from public data. PNAS, 10975–10980.

Allen, P. (2012, February 1). Google+ Passes 100 Million Users. Retrieved from Google+: https://plus.google.com/117388252776312694644/posts/9zr9iwmN4XL

Bennett, S. (2012, January 13). Twitter on Track for 500 Million Total Users by March. Retrieved from All Twitter: http://www.mediabistro.com/alltwitter/twitter-active-total-users_b17655

Business Dictionary. (2012). Privacy Policy. Retrieved from BusinessDictionary.com: http://www.businessdictionary.com/definition/privacy-policy.html

comScore. (2011, December 21). It’s a Social World: Social Networking Leads as Top Online Activity Globally, Accounting for 1 in Every 5 Online Minutes. Retrieved from comScore: http://www.comscore.com/Press_Events/Press_Releases/2011/12/Social_Networking_Leads_as_Top_Online_Activity_Globally

Connelly, R. V. (2010, September 28). What is a Privacy Policy? Retrieved from Render Visions Consulting: http://www.rendervisionsconsulting.com/blog/what-is-a-privacy-policy/

Facebook. (2012). Data Use Policy. Retrieved from Facebook: http://www.facebook.com/full_data_use_policy

Facebook. (2012). Newsroom. Retrieved from Facebook.com: http://newsroom.fb.com/content/default.aspx?NewsAreaId=22

Facebook. (2012). Terms. Retrieved from Facebook: http://www.facebook.com/legal/terms

Federal Trade Commission. (2007, June 25). Privacy Online: A Report to Congress. Retrieved from FTC.gov: http://www.ftc.gov/reports/privacy3/toc.shtm

FTC. (2011, November 29). Facebook Settles FTC Charges That It Deceived Consumers By Failing To Keep Privacy Promises. Retrieved from FTC.gov: http://www.ftc.gov/opa/2011/11/privacysettlement.shtm

FTC. (2012, March). Protecting Consumer Privacy in an Era of Rapid Change. Retrieved from FTC.gov: http://ftc.gov/os/2012/03/120326privacyreport.pdf

GEV. (2011, April 14). Popularity of Social Networking Sites. Retrieved from GEV: http://www.gev.com/2011/04/popularity-of-social-networking-sites-3/

Goodin, D. (2011, October 14). Facebook accused of violating US wiretap law. Retrieved from The Register: http://www.theregister.co.uk/2011/10/14/facebook_tracking_lawsuit/

Google. (2011, July). The 1000 most-visited sites on the web. Retrieved from Google: http://www.google.com/adplanner/static/top1000/

Google. (2012). Learn More. Retrieved from Google+: http://www.google.com/+/learnmore/

Google. (2012). Privacy Policy. Retrieved from Google Policies and Principles: http://www.google.com/intl/en/policies/privacy/

Google. (2012). Terms of Service. Retrieved from Google Policies and Procedures: http://www.google.com/intl/en/policies/terms/

Kerry, J. S. (2011, April 12). Kerry, McCain Introduce Commercial Privacy Bill of Rights. Retrieved from kerry.senate.gov: http://kerry.senate.gov/imo/media/doc/Commercial%20Privacy%20Bill%20of%20Rights%20Press%20Release1.pdf

Rawson, C. (2012, February 17). Google allegedly bypassed privacy settings to track user browsing in Safari. Retrieved from tuaw.com: http://www.tuaw.com/2012/02/17/google-allegedly-bypassed-privacy-settings-to-track-user-browsin/

Regan, K. (2001, June 15). Does Anyone Read Online Privacy Policies? Retrieved from E-Commerce Times: http://www.ecommercetimes.com/story/11303.html

Reisinger, D. (2009, February 12). Twitter security: There’s still a lot of work to do. Retrieved from CNET News: http://news.cnet.com/8301-17939_109-10162649-2.html

Reisinger, D. (2010, May 24). 10 Ways Facebook Can Improve Privacy and Security. Retrieved from eweek.com: http://www.eweek.com/c/a/Cloud-Computing/10-Ways-Facebook-Can-Improve-Privacy-and-Security-856070/

RT. (2012, January 25). Google to track users… like never before! Retrieved from RT.com: http://rt.com/news/google-privacy-policy-tracking-671/

Skynews. (2012, February 16). Twitter admits peeking at address books, announces privacy improvements. Retrieved from Fox News: http://www.foxnews.com/scitech/2012/02/16/twitter-admits-peeking-at-address-books-announces-privacy-improvements/

Twitter. (2011, June 1). Terms of Service. Retrieved from Twitter: https://twitter.com/tos

Twitter. (2011, June 1). Twitter Privacy Policy. Retrieved from Twitter.com: https://twitter.com/privacy

Twitter. (2012). About. Retrieved from Twitter: http://twitter.com/about

Vedantam, S. (2012, April 19). To Read All Those Web Privacy Policies, Just Take A Month Off Work. Retrieved from npr.org: http://www.npr.org/blogs/alltechconsidered/2012/04/19/150905465/to-read-all-those-web-privacy-policies-just-take-a-month-off-work

Zuckerberg, M. (2011, November 29). Our Commitment to the Facebook Community. Retrieved from The Facebook Blog: http://blog.facebook.com/blog.php?post=10150378701937131

 

 


The Life and Crimes of a Carder


The Life and Crimes of a Carder

By: Amy L. Wees

University of Maryland University College

CSEC620

April 6, 2012


Abstract

The Internet carding industry is responsible for the identity theft, fraud, and financial losses of countless individuals and businesses every year.  The most lucrative example of the carding network came from a website called CarderPlanet.  Criminals steal account information, credit cards, and personally identifiable information in a variety of ways, then buy, sell, or trade the information online, after which the information can be used to make purchases, withdraw money, or further the carder’s career.  Though CarderPlanet was taken down and many arrests were made, similar sites and forums are still in existence and flourishing across the Internet.  To learn more about the way carding works and why it is so appealing to criminals, one can look at the ease of the craft, the multiple ways to get involved, and the habits and profiles of arrested criminals.  This paper will explore the carding crime, the criminals’ actions and motivations, lessons learned from victims, and prevention strategies.

 

Keywords: Carders, Identity Theft, Credit Card Fraud, Cyber-crime

 

The Life and Crimes of a Carder

The words of a fictitious Internet advertisement boast: “Don’t miss it! There is a limited time only sale on stolen identifications, debit and credit cards including PINs and CVVs, counterfeiting equipment, bank account information and PayPal accounts!  Get dumps of U.S. accounts for as little as 20 dollars!  Learn how to make your own credit cards with our specialized equipment.  It has never been easier to get your hands on all of this FREE money!! Fine print: Membership required, website can be relocated at any time and cannot be held liable for unlawful transactions.  All transactions are risky and success is not guaranteed.”

Unfortunately, the above advertisement illustrates a scenario that is very real.  The Internet carding industry is responsible for the identity theft, fraud, and financial losses of countless individuals and businesses every year.  Criminals steal account information, credit cards, and personally identifiable information in a variety of ways, then buy, sell, or trade the information online, after which the information can be used to make purchases, withdraw money, or further the carder’s career.  Though these criminals can make a lot of easy money and mask their identities behind online codenames to avoid capture, there are many separate roles played in this crime ring and different motivations for involvement.  This paper will explore the carding crime, the criminals’ actions and motivations, lessons learned from victims, and prevention strategies.

 

The Threat

The most lucrative example of the carding network came from a website called CarderPlanet.  CarderPlanet was launched in 2003 and quickly became known in the underground community as the place to go to learn the secrets of the carder trade and how to make money from stolen credit cards and identities.  Forum topics on the site covered everything from beginners’ instructions, sales or trades of credit cards, identity theft information and sales, programming, hacking and carder software, and how to maintain anonymity and security, to employers offering carding jobs (Munns, 2010).  The site listed fake contact information for an address in Ho Chi Minh City, Vietnam, and an administrator who went by the alias “Script.”  “Script” was so bold he even created several online advertisements boasting of CarderPlanet’s success.  One of the flashy advertisements makes the following statements in capital letters: “NEED RELIABLE PARTER [sic]? CARDERPLANET! WORLD-CLASS CARDERS; GENIUS OF PROCESSING SECURITY; PROFESSIONALS OF PAYMENT SYSTEMS; WE GIVE YOU THE KNOWLEDGE; PROFITABLE STRATEGIES, CARDERPLANET TACTICS AND TUTORIALS; CARDERPLANET IS INEVITABLE” (F-Secure, 2008).

The site was easy to find, both for casual Internet users and for Federal Bureau of Investigation (FBI) investigators attempting to hunt down cybercriminals.  Authorities gained many leads from posts on the site that could be linked to open cases, but found only aliases and little regarding the locations or actual identities of the criminals.  Interpol soon became involved, and with the cooperation of multinational law enforcement agencies, arrests were made and the site was brought down (Munns, 2010).  In a 2010 FBI press release after the arrest of one of CarderPlanet’s founders, Vladislav Anatolievich Horohorin, U.S. Secret Service Assistant Director for Investigations Michael Merritt stated:

“The network created by the founders of CarderPlanet, including Vladislav Horohorin, remains one of the most sophisticated organizations of online financial criminals in the world; this network has been repeatedly linked to nearly every major intrusion of financial information reported to the international law enforcement community” (U.S. Department of Justice , 2010).

Though CarderPlanet was taken down and many arrests were made, similar sites and forums are still in existence and flourishing across the Internet.  To learn more about the way carding works and why it is so appealing to criminals; one can look at the ease of the craft, the multiple ways to get involved, and the habits and profiles of arrested criminals.

Threat Profiles and Scenarios

According to University of Maryland University College (2010), a threat profile has five elements: asset – an item of value, whether data or physical property; actor – the person causing damage; motive – the reason for the action; access – the means of obtaining the item; and outcome – the eventual result of the action (p. 9).  For the purposes of this paper, threat profiles will be given based on observed and reported scenarios of carders.
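
For concreteness, the five elements can be captured in a small data structure.  A sketch, with Scenario 1 (described next) expressed in that form; the field values are paraphrased from the scenario, not quoted from the source:

from dataclasses import dataclass

@dataclass
class ThreatProfile:
    asset: str    # item of value, data or physical property
    actor: str    # person causing damage
    motive: str   # reason for the action
    access: str   # means of obtaining the item
    outcome: str  # eventual result of the action

# Scenario 1 (wardriving data breach), expressed in this structure:
wardriving = ThreatProfile(
    asset="credit and debit card numbers",
    actor="eleven cybercriminals",
    motive="financial gain",
    access="wireless network intrusion and packet sniffers",
    outcome="severe financial losses to major retailers",
)
print(wardriving)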

Scenario 1: Data Breach via Wardriving

In 2010, eleven cybercriminals were charged with conspiracy, computer intrusion, fraud, identity theft, and various other crimes after stealing forty million credit and debit card numbers via wardriving.  The criminals tapped into wireless networks using laptops while parked in front of various retailers including Sports Authority, TJ Maxx, Barnes & Noble, Marshalls, and Office Max.  After they gained access to a network, packet sniffers were installed to capture account numbers as cash registers processed purchases (U.S. Department of Justice, 2010).

Threat Profile

The asset in this case is the credit and debit card numbers.  There were 11 separate actors, most with the motive of financial gain, as account numbers were sold over the Internet or imprinted on the magnetic strips of counterfeit cards and used to withdraw thousands of dollars (DOJ, 2008).  Ukrainian Maksym Yastremski was a well-known online seller of stolen cards who reportedly gained eleven million dollars from his crimes.  U.S. citizen Albert Gonzalez was also caught while simultaneously acting as a Secret Service informant on a separate operation (Poulsen, 2008).  Gonzalez’s motive may have been to lessen a previous sentence by working as an informant, but also to use this position as cover to participate in other crimes for financial gain.  He may have been addicted to the crime, given that he could not stop even after being caught.  The outcome of this crime was severe financial losses to several major retailers; the cost of the intrusion to TJ Maxx alone was reported to be over 130 million dollars (Poulsen, 2008).

Prevention Strategies

            How could these wardriving attacks have been prevented?  Data on a wireless network is transmitted via radio instead of over a wire, leaving it highly vulnerable to interception.  The first step in protection is to keep all essential data on a more secure wired network and never connect a device loaded with critical data to an unsecured wireless network.  Next, router defaults should be changed from factory settings and the Service Set Identifier (SSID) should not be broadcast.  When setting passwords, ensure they are complex enough to deter a password cracker.  Third, Media Access Control (MAC) address filtering and Dynamic Host Configuration Protocol (DHCP) restrictions can be used to limit the number of workstations or devices allowed to access the network; a toy illustration of MAC filtering appears below.  Last and most importantly, ensure the information sent over the wireless network is encrypted.  The strongest widely deployed encryption standard is Wi-Fi Protected Access 2 (WPA2), included in current router configurations.  Information should also be protected at the source using anti-virus programs, personal firewalls, and wireless network firewalls.  For businesses that need even more protection, virtual private networks (VPNs) can be used to ensure anyone connecting to the network enters via a secure gateway (Comodo, 2006).
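
As a toy illustration of the MAC filtering step: real enforcement happens in the access point or router, and MAC addresses can be spoofed, so this is one layer of defense rather than a complete control.  The addresses below are invented:

# Only devices on an explicit allow list may join the network.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",   # point-of-sale terminal (example value)
    "00:1a:2b:3c:4d:5f",   # back-office workstation (example value)
}

def admit(client_mac: str) -> bool:
    """Return True only if the client's MAC address is on the allow list."""
    return client_mac.lower() in ALLOWED_MACS

print(admit("00:1A:2B:3C:4D:5E"))   # True
print(admit("66:77:88:99:aa:bb"))   # False - unknown device rejected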

Scenario 2: Skimming

            In 2011, carders were arrested in several states after installing skimming devices on top of existing automatic teller machine (ATM) card slots and on the entryway doors used for access to the machines.  Additionally, carders installed pinhole cameras pointed at the ATM number pads (KTLA News, 2012).  The skimming devices captured the account numbers on customers’ debit cards, and carders later used these numbers in combination with the PINs from captured video to create counterfeit cards used for purchases and cash withdrawals (Kitten, ATM Skimmer Sentenced to Jail, 2011).

Threat Profile:

The asset in this scenario is the account data and PINs.  In this case there were three actors believed to be linked to a larger crime ring, as several separate arrests were made for similar crimes in New York.  Gabriella Graham pleaded guilty to acting as a lookout for other members of her team while they installed cameras and skimming machines at eleven banks in Connecticut, Massachusetts, and Rhode Island.  She also admitted to creating and using counterfeit debit cards.  At first glance Graham’s motive appears to be financial gain, though she was labeled a mule by authorities and offered a lower sentence in exchange for her testimony against accomplices, suggesting she may have been pressured into involvement by others.  The skimming attacks cost banks and customers over $335,000 (Kitten, ATM Skimmer Sentenced to Jail, 2011).

Prevention Strategies

            Julie McNelley, a fraud analyst for Aite Group, states “ATM skimming has helped push debit-related fraud losses to the top of the card-fraud list; debit losses now outpace credit card fraud” (Kitten, Skimmers Busted by Fraud Detection, 2011).  Customers and banks need to know how to protect themselves from skimming.  Customers need to keep an eye on their account statements, look for irregular charges, and report them to the bank immediately.  Credit cards offer strong fraud protection, but federal consumer protection rules cap a debit cardholder’s guaranteed liability at $50 only when the fraud is reported promptly.  If a customer’s bank account is drained due to theft or fraud, the bank does not have to refund the money until a full investigation determines there was no fault on the customer’s part (Sullivan, 2004).  Some banks use fraud detection software that limits the amount of cash that can be withdrawn on a daily basis and looks for irregular customer spending habits, such as large dollar amounts outside of the immediate area; a simplified sketch of such rules follows.
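
A simplified sketch of such rule-based checks.  The thresholds and regions are invented for illustration; production systems tune limits per customer and combine many more signals:

# Cap daily ATM withdrawals and flag unusually large transactions far
# from the customer's home area.
DAILY_WITHDRAWAL_LIMIT = 500.00      # dollars, assumed policy value
LARGE_AMOUNT = 300.00                # "irregular" size threshold, assumed

def flag_transaction(amount, withdrawn_today, home_region, txn_region):
    """Return a list of reasons this transaction looks suspicious (empty if none)."""
    reasons = []
    if withdrawn_today + amount > DAILY_WITHDRAWAL_LIMIT:
        reasons.append("daily withdrawal limit exceeded")
    if amount >= LARGE_AMOUNT and txn_region != home_region:
        reasons.append("large transaction outside home area")
    return reasons

print(flag_transaction(400.00, 200.00, "CT", "CT"))   # limit exceeded
print(flag_transaction(350.00, 0.00, "CT", "NV"))     # out-of-area large transaction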

Customers should also pay attention to ATM card slots or credit card swiping machines that look out of the ordinary.  If it appears that something has been attached to the original machine, do not use it, and report the suspicion to the vendor (Rogak, 2012).  Skimming devices have also been found in the hands of cashiers and wait staff at restaurants, so customers should pay at the register when possible and not leave their card with staff for long periods of time (such as for a bar tab).  Retailers should mount security cameras over all areas in the store where transactions are processed to deter employees from theft or fraud (Crane, 2008).

Scenario 3: Phishing

In December 2011, the United Kingdom’s e-crime unit caught six cybercriminals running a phishing scam targeted at college students across the U.K.  The criminals sent e-mails to students at various schools asking them to update the login details for their student loans.  Some students followed the e-mail link to an official-looking website and provided enough personal information for the criminals to gain access to their bank accounts (Kovacs, 2011).

Threat Profile:

            The assets were the student loan accounts and the bank accounts.  The actors, whose names were not released, were four men and two women, most in their mid-20s and one aged 49.  Police found computers and storage media used to access the stolen information (Neal, 2011).  The motive was financial gain: amounts of up to 5,000 pounds were withdrawn at a time, adding up to over 1 million pounds stolen.  The U.K. charged the suspects with “conspiracy to defraud, money laundering and other offences under the Computer Misuse Act” (Ashford, 2011).  The outcome for the victimized students and banks is unknown.

Prevention Strategies

Consumer awareness is key to preventing phishing attacks, as the volume of phishing e-mails sent and the variety of their subjects are substantial.  Consumers need to know the hallmarks of phishing e-mails and web addresses so they can recognize the scams in their inboxes.  The Anti-Phishing Working Group (APWG) offers consumer advice and recommendations; a brief summary is given, with an illustrative sketch after the list:

  • Do not respond to e-mails with requests for personal financial information; banks and other businesses will not ask for this information via e-mail
  • Avoid clicking on links in an e-mail.  Type the known web address into the address bar instead
  • When purchasing items online, use trusted retailers and ensure the site uses https:// and displays the padlock icon
  • Install a web browser toolbar that provides alerts when browsing known fraudulent websites
  • Report phishing e-mails to the company being spoofed, the Federal Trade Commission, or the Internet Crime Complaint Center of the FBI (Anti-Phishing Working Group, 2012).
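
Several of these recommendations can be approximated in code.  A rough sketch of URL heuristics in the spirit of the APWG advice; the trusted-domain list is illustrative only, and no short heuristic is a substitute for typing a known address directly:

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"slc.co.uk", "barclays.co.uk"}   # example allow list

def looks_suspicious(url: str) -> bool:
    """Flag links that lack HTTPS or point at hosts outside the allow list."""
    parts = urlparse(url)
    host = parts.hostname or ""
    if parts.scheme != "https":
        return True                      # no secure channel / padlock
    if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        return False                     # matches a known, trusted site
    return True                          # unknown host: type the address yourself

print(looks_suspicious("http://slc.co.uk.example-login.com/update"))  # True
print(looks_suspicious("https://www.slc.co.uk/account"))              # False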

Scenario 4: The Middle Man

            The U.S. Secret Service reports it has arrested “one of its five most wanted cybercriminals in the world” (Metzger, 2010).  “BadB” was an online credit card trafficker who was one of the founders of CarderPlanet.com and later opened another site named badb.biz.  “BadB” sold credit card dumps to Secret Service agents on one of his sites and collected money for the sale through a Russian-hosted payment service called WebMoney.  The sale led to his eventual identification and arrest in Nice, France (U.S. Department of Justice, 2010).

Threat Profile:

The asset in this scenario is the credit card dumps: bulk electronic copies of the magnetic-stripe data from stolen credit cards, offered for sale in online forums (CreditCards.com, 2012).  The actor is Vladislav Horohorin, a.k.a. “BadB,” who bought and sold stolen credit card data in web forums in which he reportedly participated scrupulously, posting chat rules against swearing and warnings about devious users.  On his own site, badb.biz, he advertised his services with animated cartoons depicting Russian political gain from stealing from the U.S. and carders receiving medals for their work.  Horohorin’s motive was more than just financial.  Being a founder of CarderPlanet and watching fellow carders go to prison did not derail him; he continued as a leader in the carder crime ring and made no attempt to cover his tracks, drawing attention with his bold cartoon advertisements, his website, and his avid participation on other popular carding sites (Metzger, 2010).  His actions show political motivation, as he was determined to portray Russian carders as heroes and U.S. citizens as easy, deserving targets.  Horohorin’s crimes were also motivated by ego: he wanted to see how much he could get away with, and he evidently thought he was untouchable.  The outcome of Horohorin’s crimes was his arrest.  He is charged with access device fraud and aggravated identity theft, with a total maximum sentence of up to 12 years in prison and fines of up to $500,000 (U.S. D.O.J., 2010).

Prevention strategies

Although authorities have cracked down on carders, the problem remains almost too large to conquer, and there is no sign carders are slowing down.  The credit card and banking industry must find better ways to combat the simple means by which account data can be compromised.  Europe, Japan, and various other areas around the globe have moved to a new standard using credit cards embedded with a computer chip instead of a magnetic strip.  The new cards also require the user to enter a PIN to verify their identity at the time of purchase (Tulipan, 2012).  These cards prevent ordinary skimmers from capturing reusable card data and are a step in the right direction toward more secure credit and debit cards; a simplified sketch of why chip cards resist replay appears below.  Another option would be to utilize biometric systems, either instead of cards or to verify the identity of the cardholder in lieu of a PIN.
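
Why the chip model resists skimming can be shown in miniature: a magnetic stripe holds static data that can be replayed indefinitely, while a chip computes a fresh cryptogram for each transaction using a key that never leaves the card.  The HMAC-based sketch below is a simplification for illustration, not the actual EMV protocol:

import hashlib
import hmac

CARD_KEY = b"secret-key-inside-the-chip"   # illustrative; never exposed off-card

def transaction_cryptogram(amount_cents: int, counter: int) -> str:
    """Compute a per-transaction authentication value over amount and counter."""
    message = f"{amount_cents}:{counter}".encode()
    return hmac.new(CARD_KEY, message, hashlib.sha256).hexdigest()[:16]

# Each purchase yields a different value, so a skimmed copy of one
# transaction cannot be replayed for the next.
print(transaction_cryptogram(2499, 41))
print(transaction_cryptogram(2499, 42))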

History has shown us that regulating information shared on the Internet is nearly impossible.  Regulating users of the Internet is also exceedingly tough, as many of the sites on which hackers and cybercriminals converge are quickly moved from one location or host to another, or utilize dynamic Internet Protocol addresses.  Law enforcement has come together on a global scale to bring cybercriminals to justice, but there are many more criminals to arrest than there are cybercrime teams dedicated to their capture.  Another solution, posed by journalist Misha Glenny in a TED (Technology, Entertainment, Design) talk, is to hire the hackers to design security solutions instead of jailing them.  Glenny studied some of the most notorious cybercriminals and noted that nearly all of them learned their skills in their teens before their moral compass had developed, demonstrated advanced skills in science and math, and lacked social skills.  He also noted that countries like Russia and China recruit these hackers before and after they get into crime and utilize them to develop cyber-offensive capabilities (Glenny, 2011).  Glenny ends his presentation with an interesting point: “We need to find ways of offering guidance to these young people, because they are a remarkable breed.  And if we rely, as we do at the moment, solely on the criminal justice system and the threat of punitive sentences, we will be nurturing a monster we cannot tame” (Glenny, 2011).

Conclusion

Identity theft and credit card fraud are serious global problems.  Criminals have various motivations for committing these crimes: carding does not require advanced hacking skills, it is fairly easy to hide behind an Internet address and alias, and there is money to be made.  Victims must report crimes and suspicious activity to law enforcement and consumer protection agencies, and all consumers should stay informed on the latest security threats and prevention strategies.

 

References

Anti-Phishing Working Group. (2012). Consumer Advice: How to Avoid Phishing Scams. Retrieved from APWG: http://www.antiphishing.org/consumer_recs.html

Ashford, W. (2011, December 9). UK police arrest six in £1m phishing scam. Retrieved from Computer Weekly: http://www.computerweekly.com/news/2240112250/UK-police-arrest-6-for-1m-phishing-scam

Comodo. (2006, October 11). Wardriving: What is it, how common is it, and how to protect against it. Retrieved from Comodo: http://forums.comodo.com/general-security-questions-and-comments/wardriving-what-is-it-how-common-is-it-and-how-to-protect-against-it-t3199.0.html;msg23829#msg23829

Crane, A. (2008, September 9). 5 steps to avoid ID theft at the register. Retrieved from CreditCards.com: http://www.creditcards.com/credit-card-news/merchant-data-security-identity-theft-tips-1275.php

CreditCards.com. (2012, April 6). Credit Card Glossary: Terms and Definitions. Retrieved from CreditCards.com: http://www.creditcards.com/glossary/term-dump.php

Department of Justice. (2008, August 5). Retail Hacking Ring Charged for Stealing and Distributing Credit and Debit Card Numbers from Major U.S. Retailers. Retrieved from Department of Justice: http://www.justice.gov/opa/pr/2008/August/08-ag-689.html

F-Secure. (2008, March 14). Digging the Archives for Case CarderPlanet. Retrieved from F-Secure.com: http://www.f-secure.com/weblog/archives/00001403.html

Glenny, M. (2011, July). Hire the Hackers. (M. Glenny, Performer) TED, Edinburgh, U.K.

Kitten, T. (2011, December 28). ATM Skimmer Sentenced to Jail. Retrieved from Bank Info Security: http://www.bankinfosecurity.com/articles.php?art_id=4362

Kitten, T. (2011, November 22). Skimmers Busted by Fraud Detection. Retrieved from Bank Info Security: http://www.bankinfosecurity.com/articles.php?art_id=4262

Kovacs, E. (2011, December 10). Six Phishers Arrested for Scamming UK Students. Retrieved from Softpedia: http://news.softpedia.com/news/Six-Phishers-Arrested-For-Scamming-UK-Students-239744.shtml

KTLA News. (2012, February 7). 2 Arrested for Installing Skimming Device at Chase Bank. Retrieved from KTLA News: http://www.ktla.com/news/landing/ktla-skimming-device-chase-bank,0,1600909.story

Metzger, T. (2010, August 12). Alleged cybercriminal, cartoonist arrested in France. Retrieved from Creditcards.com: http://www.creditcards.com/credit-card-news/carderplanet-badb-data-thief-cybercriminal-arrested-1282.php

Munns, D. (2010, August 12). The secret history of CarderPlanet.com and Dmitry Ivanovich Golubov. Retrieved from CreditCards.com: http://blogs.creditcards.com/2008/05/secret-history-of-carderplanet.php

Neal, D. (2011, December 9). Arrests made for student phishing scam. Retrieved from The Inquirer: http://www.theinquirer.net/inquirer/news/2131361/arrests-student-phishing-scam

Poulsen, K. (2008, August 5). Feds Charge 11 in Breaches at TJ Maxx, OfficeMax, DSW, Others. Retrieved from Wired: http://blog.wired.com/27bstroke6/2008/08/11-charged-in-m.html

Rogak, L. (2012, April 6). 10 things you should know about identity theft. Retrieved from CreditCards.com: http://www.creditcards.com/credit-card-news/help/10-things-you-should-know-about-identity-theft-6000.php

Sullivan, B. (2004, February 18). ID theft victims face tough bank fights. Retrieved from MSNBC: http://www.msnbc.msn.com/id/4264051/ns/business-online_banking/t/id-theft-victims-face-tough-bank-fights/#.T3kvBdm-2So

Tulipan, M. (2012). European Credit Card Standard Leaves Americans Stranded. Retrieved from The Savvy Explorer: http://www.thesavvyexplorer.com/index.php/life-and-style-mainmenu-31/36-tips/689-european-credit-card-standard-leaves-americans-stranded

U.S. Department of Justice. (2010, August 11). Alleged International Credit Card Trafficker Arrested in France on U.S. Charges Related to Sale of Stolen Card Data. Retrieved from Federal Bureau of Investigation: http://www.fbi.gov/atlanta/press-releases/2010/at081110.htm

University of Maryland University College. (2010). Human Aspects in Cybersecurity: Ethics, Legal Issues, and Psychology. Module 7. UMUC.

 

 
