
Monday, October 14, 2013

10 successful big data sandbox strategies


Keep in mind these ten strategies when building and managing big data test environments. 
Being able to experiment with big data and queries in a safe, secure “sandbox” test environment is important to both IT and business users as companies get going with big data. However, setting up a big data sandbox is different from establishing traditional test environments for transactional data and reports. Here are ten key strategies to keep in mind for building and managing big data sandboxes:

1. Data mart or master data repository?

The database administrator needs to decide early on whether test sandboxes should read directly from the master data repository that production uses, or whether it is better to replicate sections of that data into separate data marts reserved for testing only. The advantage of using the full repository is that tests run against the same data production uses, so results are more accurate; the disadvantage is the risk of contention with production itself. With the data mart strategy you avoid contention with production data, but the marts will need periodic refreshes to stay reasonably synchronized with production if they are to approximate the production environment closely.
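To make the data mart approach concrete, here is a minimal sketch in Python using pandas and SQLAlchemy. The connection strings, table names, and 90-day window are hypothetical placeholders; a real refresh job would add incremental loads and access controls.

    # Minimal sketch: refresh a test data mart from the production repository.
    # Connection strings, table names, and the 90-day window are illustrative only.
    import pandas as pd
    from sqlalchemy import create_engine

    prod = create_engine("postgresql://prod-host/warehouse")     # hypothetical DSN
    mart = create_engine("postgresql://test-host/sandbox_mart")  # hypothetical DSN

    def refresh_mart(table: str, days: int = 90) -> None:
        """Copy the most recent slice of a production table into the test mart."""
        query = f"SELECT * FROM {table} WHERE event_date >= CURRENT_DATE - INTERVAL '{days} days'"
        df = pd.read_sql(query, prod)
        # Replace the mart copy wholesale so testers always see a consistent snapshot.
        df.to_sql(table, mart, if_exists="replace", index=False, chunksize=10_000)

    for t in ("orders", "clickstream", "support_tickets"):  # illustrative tables
        refresh_mart(t)

Run on a schedule (nightly, for example), a job like this keeps the mart close enough to production to be useful without touching the production repository during business hours.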

2. Work out scheduling

Scheduling is one of the most important big data sandbox activities. It ensures that all sandbox work runs as efficiently as possible, usually by scheduling a group of smaller jobs to run concurrently while a longer job completes; in this way, resources are allocated to as many jobs as possible. The key is for IT to sit down with the user areas that share the sandboxes so everyone has an upfront understanding of the schedule, the rationale behind it, and when they can expect their jobs to run.
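As a rough illustration of the idea (not any particular scheduler's API), the sketch below packs several short jobs to run concurrently alongside one long job. The job names, estimated runtimes, and the run_job stand-in are made up for the example.

    # Minimal sketch: run small jobs concurrently while one long job occupies the cluster.
    # Job names, runtimes, and run_job() are illustrative placeholders.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def run_job(name: str, minutes: float) -> str:
        time.sleep(minutes / 60)  # stand-in for submitting and waiting on real work
        return f"{name} finished"

    long_job = ("nightly_model_training", 180)  # estimated minutes
    small_jobs = [("ad_hoc_query_1", 5), ("report_refresh", 10), ("sample_extract", 8)]

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_job, *long_job)]
        futures += [pool.submit(run_job, *job) for job in small_jobs]  # fill spare capacity
        for f in futures:
            print(f.result())

In practice, the schedule would come out of that sit-down with the user areas, with each team supplying realistic runtime estimates for its jobs.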

3. Set limits

If months go by without a specific data mart or sandbox being used, business users and IT should have mutually acceptable policies in place for purging these resources so they can be put back into a resource pool that can be re-provisioned for other activities. The test environment should be managed as effectively as its production environment counterpart so that resources are called into play only when they are actively being used.
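A simple way to back up such a policy is a periodic script that flags sandboxes nobody has touched for a while. The sketch below assumes sandboxes live under a single hypothetical directory and uses file modification times as a crude proxy for activity; the path and the 90-day threshold are placeholders.

    # Minimal sketch: flag sandbox directories untouched for N days as purge candidates.
    import time
    from pathlib import Path

    SANDBOX_ROOT = Path("/data/sandboxes")  # hypothetical location
    IDLE_DAYS = 90

    def idle_sandboxes(root: Path, idle_days: int):
        cutoff = time.time() - idle_days * 86_400
        for d in root.iterdir():
            if d.is_dir() and d.stat().st_mtime < cutoff:
                yield d

    for sandbox in idle_sandboxes(SANDBOX_ROOT, IDLE_DAYS):
        print(f"Candidate for purge (confirm with the owning team first): {sandbox}")

The point is not the script itself but the agreed-upon policy behind it: owners are warned, they sign off, and the freed resources go back into the pool.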

4. Use clean data

One of the preliminary big data pipeline jobs should be preparing and cleaning the data so that it is of reasonable quality for testing, especially if you are using the data mart approach. It is a bad habit, dating back to testing for standard reports and transactions, to work with test data that is incomplete, inaccurate, or even broken simply because it was never cleaned up before being dumped into the test region. Resist this temptation with big data.
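As one example of what that preliminary cleaning step might look like, the pandas sketch below drops duplicates, rows with missing keys, and implausible values before the data lands in the test mart. The column names, thresholds, and input file are hypothetical.

    # Minimal sketch: basic cleaning before data lands in the test mart.
    # Column names and value ranges are hypothetical; real pipelines add schema checks.
    import pandas as pd

    def clean(df: pd.DataFrame) -> pd.DataFrame:
        df = df.drop_duplicates()
        df = df.dropna(subset=["customer_id", "event_date"])         # required fields
        df["event_date"] = pd.to_datetime(df["event_date"], errors="coerce")
        df = df[df["amount"].between(0, 1_000_000)]                  # discard impossible values
        return df.reset_index(drop=True)

    cleaned = clean(pd.read_parquet("raw_events.parquet"))  # hypothetical input file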

5. Monitor resources

Assuming big data resources are centralized in the data center, IT should set resource allowances and monitor sandbox utilization. One area often requiring close attention is the tendency to over-provision resources as more end-user departments engage in sandbox activities.
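One way to spot over-provisioning is simply to compare what each department has been allocated with what it actually uses. The figures and the 25 percent threshold in the sketch below are illustrative only; real monitoring would pull these numbers from the cluster manager or storage system.

    # Minimal sketch: flag sandboxes whose allocations far exceed actual usage.
    allocations = {"marketing": 2000, "finance": 1500, "ops": 800}  # GB allocated (illustrative)
    usage = {"marketing": 300, "finance": 1400, "ops": 150}         # GB actually used (illustrative)

    for dept, allocated in allocations.items():
        used = usage.get(dept, 0)
        if used < 0.25 * allocated:  # arbitrary 25% utilization threshold
            print(f"{dept}: {used}/{allocated} GB used -- consider shrinking the allocation")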

6. Watch for project overlap

At some point, it makes sense to have a corporate “steering committee” for big data that tracks the various sandbox projects going on throughout the company and ensures there is no overlap or duplicated effort.

7. Consider centralizing compute resources and management in IT

Some companies start out running big data projects in specific departments but quickly learn that those departments can’t analyze big data, do their daily work, and manage compute resources, too. Ultimately, they move the equipment into the data center for IT to manage, which frees the departments to focus on the business and on the ways big data can deliver value.

8. Use a data team

Even in sandbox experimentation, it’s important to have the requisite big data skills team on hand to assist with tasks. Typically, this team consists of a business analyst, a data scientist, and an IT support person who can fine-tune hardware and software resources and coordinate with database specialists.

9. Stay on task with business cases

It’s important to infuse creativity into sandbox activities, but not to the point where you lose sight of the business case you set out to support.

10. Define what a sandbox is!

Participants from the business side, in particular, may not be familiar with the term “sandbox” or what it implies. Like the childhood sandbox, a big data sandbox is a place to play and experiment freely, but with purpose. Part of that purposeful activity is abiding by the ground rules of the sandbox, such as when, where, and how to use it, and experimenting in ways that produce meaningful results for the business.

Why everyone wants a private cloud


Concerns about security and control make the "private" cloud a more palatable model for many companies. How sound is this kind of thinking? 
“We’re going to the cloud for VDI (virtual desktop infrastructure), and we’re going to have our own cloud,” said an IT manager of a one-man shop (himself) at a manufacturing company with 20 employees.
The manager and the CEO believed they could implement their own private cloud by using a “cloud in a box” solution for office applications, saving the company money through fewer license fees for office software. They planned to rely on the cloud equipment vendor that had sold them the solution to provide both implementation and system-tuning expertise and support.
For these managers, there was also the benefit of “bragging rights,” because it’s popular today to have a cloud of your own, no matter how small you are.
The question is, why?
Inevitably, fears about the security of applications and data are the first things mentioned when the alternative of going to a public cloud comes up.
However, for many small companies with limited IT resources, data and application security have always been lax, even when they run their own internal IT operations. Many of these companies routinely accept the downtime brought on by a denial-of-service (DoS) attack or the loss of data suffered when a system unexpectedly goes down.
So given this, why is it so important to have your own private cloud?
Some speculate that organizations have been developing their own IT infrastructures for years, and that these infrastructures have hosted, and continue to host, the organization’s business-critical applications. In addition, organizations, regardless of their size, like the idea of data sovereignty: keeping business-critical data in-house rather than exposing it through the widely available public interfaces that characterize the public cloud environment. Finally, businesses are aware that they must satisfy regulations and regulators, especially in industries like finance or healthcare.
Still other companies are uncomfortable relinquishing control of the information lifelines of their businesses to outside vendors, even if they are convinced that their data is absolutely secure. Behind this is a concern about control, and a fear that a breakup with a cloud vendor could mean major risk and disruption for the business as it struggles to re-insource data it should never have outsourced.
The truth is, we all understand that the cloud is here to stay and that it will continue to make inroads into data centers and IT infrastructure. What we don’t know is where the inevitable “pushbacks” will occur down the road.
“When you’ve been in IT for over thirty years, you see a lot of changes in thinking, and invariably thought cycles reverse and ‘old thoughts’ resurface in new ways,” said John Heller, retired CIO of Caterpillar. Heller was referring to the centralized computing of the 1960s and 1970s, which gave way to decentralized, distributed computing in the 1980s and then returned to centralized computing with the growth of virtualization in the 1990s and the 21st century.
Consequently, it isn’t too far-fetched for organizations to hedge against the turns that technology thinking takes and to embark on their own cloud journeys with the desire to understand fully what cloud is all about and how it works, regardless of how small they are. For most companies, this means engagement with a private cloud.

Sunday, April 14, 2013

Enterprise architecture


Enterprise architecture (EA) is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise's future state and enable its evolution.[1]


Practitioners of EA call themselves enterprise architects. An enterprise architect is a person responsible for performing this complex analysis of business structure and processes and is often called upon to draw conclusions from the information collected. By producing this understanding, architects are attempting to address the goals of Enterprise Architecture: Effectiveness, Efficiency, Agility, and Durability.


Relationship to other disciplines

Enterprise architecture is a key component of the information technology governance process in many organizations, which have implemented a formal enterprise architecture process as part of their IT management strategy. While this may imply that enterprise architecture is closely tied to IT, it should be viewed in the broader context of business optimization in that it addresses business architecture, performance management and process architecture as well as more technical subjects. Depending on the organization, enterprise architecture teams may also be responsible for some aspects of performance engineering, IT portfolio management and metadata management. Recently, protagonists like Gartner and Forrester have stressed the important relationship of Enterprise Architecture with emerging holistic design practices such as Design Thinking and User Experience Design.[14][15][16] Analyst firm Real Story Group suggested that Enterprise Architecture and the emerging concept of the Digital workplace were "two sides to the same coin."[17]


A diagram in the 2006 FEA Practice Guidance from the US OMB sheds light on the relationship between enterprise architecture and segment (BPR) or solution architectures.



Tuesday, October 25, 2011

Exactly what is 'Thinking Outside the Box'?



A reason we often hear for the need for innovation training is to get "our team to think outside the box."
This may come from the person at the top, frustrated that the quality of solutions or ideas is not great. It also comes from people working in teams who feel that the contributions of others are not helping them find new and original solutions to the challenges they face. If you have ever been in this situation, you know how hard it is to deal with. Perhaps it is best to start with what the term actually means.
I don't know of an official definition for "out of the box" thinking, but here is my perspective starting with "in the box" thinking.

Inside The Box
Thinking inside the box means accepting the status quo. For example, Charles H. Duell, Commissioner of the US Patent Office, reputedly said, "Everything that can be invented has been invented." That was in 1899: clearly he was in the box!
In-the-box thinkers find it difficult to recognize the quality of an idea. An idea is an idea. A solution is a solution. In fact, they can be quite pigheaded when it comes to valuing an idea. They rarely invest time to turn a mediocre solution into a great solution.
More importantly, in-the-box thinkers are skillful at killing ideas. They are masters of creativity-killing attitudes such as "that'll never work" or "it's too risky." The best in-the-box thinkers are unaware that they drain the enthusiasm and passion of innovative thinkers even as they kill their innovative ideas.
They also believe that every problem needs only one solution; therefore, finding more than one possible solution is a waste of time. They often say, "There is no time for creative solutions. We just need THE solution."
Even great creative people can become in-the-box thinkers when they stop trying. Apathy and indifference can turn an innovator into an in-the-box thinker.
In only one case is in-the-box thinking necessary. This comes from a cartoon: a man talks to his cat and points to the kitty litter box. He says, "Never ever think outside the box!"
Outside the Box
Thinking outside the box requires different attributes that include:
  • Willingness to take new perspectives to day-to-day work.
  • Openness to do different things and to do things differently.
  • Focusing on the value of finding new ideas and acting on them.
  • Striving to create value in new ways.
  • Listening to others.
  • Supporting and respecting others when they come up with new ideas.
Out-of-the-box thinking requires an openness to new ways of seeing the world and a willingness to explore. Out-of-the-box thinkers know that new ideas need nurturing and support. They also know that having an idea is good but acting on it is more important. Results are what count.

Sunday, September 11, 2011

1, 2, 3 Action Plan: Goals that Measure Success

For a department or organization striving to initiate or enhance its grant-writing capacity, it is important to understand your baseline expertise and establish realistic targets. This article lays out a simple 1-2-3 action plan for establishing goals that measure success when you are starting from scratch.

How to Measure Success when Starting from Scratch

If you are wondering how you will prove grant-writing success when your department has never written a grant, you are already on the right path: you recognize that a mechanism should be in place to assess success. The first rule is to start simple. Avoid setting unrealistic targets for staff who may already be skeptical of, and resistant to, a new grant initiative. To kick-start the effort, consider the following 1-2-3 approach to developing your action plan:

1. Identify key personnel
One of the first things to remember when establishing goals as a first-timer is that this is not a brand-new concept; someone else has likely done it before, if not from an organizational position, at least from a programmatic standpoint. Therefore, there is no need to reinvent the wheel. Identify grant professionals within other departments with whom you may be able to consult. If this option is not available, identify key personnel from within your own department who are willing participants. This will also help to establish buy-in and minimize resistance.

2. Brainstorm
When done properly, this can be a prime opportunity to create a sound grant plan that goes beyond the one-time grant win. When setting goals as a first-timer, remember to start simple. You don’t want this to be an overwhelming process. Begin with brainstorming what type of information is already available.

For a grant-active organization, the first task is typically to research the entity’s funding history, but this article assumes you are starting at ground zero. Take a dual approach that covers both the short and the long term: consider programs that require funding on a short-term basis as well as programs whose current funding is insufficient.

eCivis recommends that all programs considered for grant funding be strategically linked to the department-wide and/or organization-wide mission; therefore, a key component of establishing your goals should be selecting programs for grant consideration that are part of a larger strategic objective.

Think about programs for which data is already collected. This will influence your direction and focus in combination with funding priorities. You will likely have anywhere from three to ten priority projects, although you can certainly have more. Other questions to consider include:

• What resources are available to research grants?

• Does staff have the appropriate training to effectively pursue, develop, and manage grants?

3. Establish and monitor goals
Set ambitious, yet achievable expectations. Decide on a combination of short-term and long-term goals.

Remember, the successful grant organization is in this for the long haul. Some sample goals for the active grant department include:

• Projects: Number of grants considered for priority projects (short-term)

• Capability: Number of staff trained on grants (short-term)

• Applications: Number of grant applications submitted (intermediate-term)

• Funding Awarded: Amount of dollars awarded (long-term)

• Win ratio: Number of grants awarded to applications submitted (long-term)

A combination of short-, intermediate-, and long-term goals makes the process more manageable and allows staff and leadership to view progress more easily throughout the grant lifecycle.

Set up a transparent, easily shared grants management system for collecting and evaluating this information.

Ensure accountability by assigning responsibilities and deadlines. Periodically review this information with staff. When you reach a goal, announce it, celebrate it, and then consider the factors that led to the success. If you don’t succeed, be up-front: was the goal too ambitious, or the effort too weak? Debrief, ask what could have been done differently or more effectively, and then make plans for next time. Goal setting is a continuous process requiring ongoing monitoring and evaluation, with adjustments to ensure that your new effort actually reaps organizational benefits that can be quantitatively measured.

Start simply by identifying key personnel, brainstorming, and then establishing ambitious, yet realistic goals that will be monitored and evaluated on an ongoing basis. Remember, establishing goals is not simply a matter of sometimes hitting a specific target, but rather achieving a target that leads to improved services and programs or, in this case, improved grant performance year after year.