ISACA Now Blog


One year later: Lessons learned from the Japanese tsunami

Posted at 9:25 AM by ISACA News | Category: Risk Management

A year has passed since the tsunami struck my country. Terrible tragedies occurred that day, but it is important to look back and find valuable lessons related to disaster recovery and business continuity management.


First, let us look at what happened:

1.      First phase (the first three days)

   Many enterprises lost key personnel (decision makers, those who were responsible for risk response, etc.) as a result of the tsunami.

   The telephone/communication networks were congested as a result of the disaster.

   The electricity supply was stopped.

   Many organizations suffered simultaneously, meaning that not only was an enterprise suffering and/or unable to function, but so were its vendors in many cases.

   IT resources were heavily damaged or lost.

   Transportation routes were heavily damaged.

   Earthquake aftershocks continued to occur.

2.      Second phase (the first three months)

   Rotational (but not well-planned) blackouts were enforced.

   Continuing electricity shortages occurred in the disaster area.

   Legal “Saving Electricity” measures were enforced in the Kanto area (Tokyo and surrounding prefectures).

3.      Third phase (four months and beyond)

Electricity shortages on the western side of Japan became severe. Because of the shutdown of nuclear power plants, the area saw migrations of factories, data centers and other facilities from the Kanto area to the western side of Japan.


Next, let us look at the impacts on IT-related business operations:

1.     First phase

Chains of command were lost. Almost no one could decide on appropriate measures for recovering IT infrastructure.

Communication channels were lost. It was nearly impossible to get accurate information such as, “Who is still alive?”, “Who is in charge?”, “What happened?” and “What is the current status?”.

The stock of fuel for emergency power supplies was very limited (one or two days’ worth).

Server rooms were strictly protected by electronic security systems, so without enough electricity these security systems became obstacles to emergency response as the aftershocks kept coming. Some organizations simply kept their server room doors open.

2.     Second phase

   Substitute facilities or equipment (servers, PCs, etc.) were not supplied quickly from vendors, because so many organizations had suffered from the disaster.

   Backup centers also suffered in the disaster. Therefore, quick recovery was almost impossible at many organizations.

   Because of rotational blackouts, companies could not access their servers from remote offices.

Data recovery was a very heavy task. If a company’s backup rotation was once per week, it lost almost one week’s worth of data. In some cases, both electronic and paper-based backups were lost.

In the areas evacuated as a result of the nuclear power plant accidents, nobody could enter their own offices.

Many IT-related devices were stolen, and many devices that were washed away by the tsunami eventually fell into the wrong hands.

The fuel supply stopped in many areas because transportation routes were not restored quickly.

   Many organizations moved their data centers and factories to the western side of Japan.

Emergency power supplies could not operate for long periods of time; they were designed for short-term operation.

Monthly data processing was impossible at many organizations, causing significant delays.

3.     Third phase

On the western side of Japan, many organizations confronted electricity shortages and could not resume full operations.
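The backup-rotation problem above can be quantified. With periodic full backups, the worst-case data loss (the recovery point objective, or RPO) equals the backup interval: a failure just before the next backup loses everything written since the last one. A minimal sketch in Python, with function names that are illustrative rather than taken from any particular tool:

```python
from datetime import datetime, timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """Worst-case data loss (RPO) under a periodic full-backup scheme:
    a failure just before the next backup loses almost one full interval."""
    return backup_interval

def data_loss_at(last_backup: datetime, failure: datetime) -> timedelta:
    """Actual data lost if disaster strikes at `failure` and the most
    recent usable backup was taken at `last_backup`."""
    return failure - last_backup

# Weekly rotation, as in the example above: up to seven days of data at risk.
weekly_rpo = worst_case_data_loss(timedelta(weeks=1))

# A failure four days after the last backup loses four days of data.
lost = data_loss_at(datetime(2011, 3, 7), datetime(2011, 3, 11))
```

Shortening the interval (for example, nightly backups combined with off-site replication) is the direct way to shrink this worst case.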


Now that we have a full picture of the devastation that occurred, let us look at the lessons we’ve learned:

1.     Bad situations can continue for a long time. Quick recovery is sometimes impossible. Be prepared for this.

2.     Prepare as many people as you possibly can who can respond to disasters. A fixed definition of roles and responsibilities may be hazardous, since key personnel may themselves be lost.

3.     Data encryption is indispensable; devices that fall into the wrong hands must not expose readable data.

4.     A cloud computing environment can be very helpful in situations like this.

5.     Uncertainty-based risk management is necessary.

Japanese history records many huge earthquakes and tsunamis. We must study our history more carefully and recognize that these events can happen to us.

Recently, many heavy earthquakes and tsunamis have occurred along the same seismic zone (the Circum-Pacific Earthquake Belt). The “Sumatra Disasters,” from 2004 to 2010, caused major earthquakes and tsunamis, including a magnitude 9.1 earthquake in 2004. We must learn from these disasters, and we must take account of the fact that a disaster of the same size can occur at any time in the same seismic zone.

   We cannot predict exactly when, where and how. But we can prepare for uncertainties.

6.     Preparing many risk scenarios may be useless. Too many risk response manuals serve as a “tranquilizer” for the organization. Instead, implement a risk management framework that can serve you well in preparing for and responding to a disaster.


Disasters can occur anytime and anywhere. Sit down with your colleagues and make a plan now.



Masatoshi Kajimoto, CISA, CRISC

Chair of ISACA’s GRA Regional Subcommittee 1





Great lessons learned, thank you. I will share with my organization. MHanson CISM, C|EH
Thames11 at 3/14/2012 5:58 AM

Re: One year later: Lessons learned from the Japanese tsunami

We all must share this article with our Business Continuity team. Certainly, will do it myself.
Donatas at 3/15/2012 4:08 AM

Clear & Useful

Thank you.
I already started sharing these lessons with my team.
And I'm going to do so organization-wide.
MoDiop at 3/15/2012 6:48 AM

Thanks for Insight

I will share this with my team.
Simon639 at 3/17/2012 2:23 AM

Useful & Thanks

I will share the same with my colleagues.

Oorvashi877 at 3/20/2012 3:26 AM