
SharePoint Co-authoring: It’s a date


Ever been cursed with this pop-up?

[Screenshot: Word’s locked-file warning dialog]

On a Friday afternoon, right before close of business, when you need to get your TPS Report submitted to upper management, this can be the most horrifying message a computer can send you.

(Well, maybe just after the newly designed blue screen of death.)

[Screenshot: the Windows blue screen of death]

And, oh joy, Word gives you three options: 1) save a local copy of what’s likely an outdated file, 2) get a notification when the person gets back (after the weekend, amirite!?), or 3) give up all hope and press ‘Cancel’.

I can’t think of a better example of a lose-lose-lose situation.

So, why does this warning come up? Well, if you store a file—Word doc, Excel spreadsheet, PowerPoint presentation, etc.—in a shared drive or on older (or non-updated) versions of SharePoint, the system limits the number of editors to—you guessed it—one. (This is completely separate from the check in/out process in SharePoint, so don’t mix them up.)

And that’s the reason for this warning. It’s a courtesy (admittedly, that may be an overly nice term) meant to tell you that the file’s currently being worked on—or, more commonly in my experience: forgotten and never saved and closed—by someone else.

As if our lives aren’t stressful enough, I suspect this pop-up puts thousands on the brink of heart attacks every day. Thanks, Microsoft.

The solution

Because it can be a real nightmare to deal with this situation, Microsoft has finally introduced a service that allows you to edit files concurrently with your colleagues. And they call it co-authoring.

Basically, co-authoring allows multiple users to simultaneously edit the same document from multiple PCs. For the record, I hate this name. Because it’s totally misleading. But on we go.

If you’re familiar with Google Docs, concurrent/simultaneous editing has been available for years. So, for many of us, this functionality was a breath of fresh air when it was finally introduced in SharePoint 2013 and Office 2013.

Note, everything in this post relates to SharePoint 2013, Office 2013, and Office 365 (including SharePoint Online). If you’re looking for info on how Office or SharePoint 2010 or 2007 support co-authoring, see Microsoft’s documentation for those versions.

I strongly suggest you push your IT department to upgrade if you’re still on these older versions of the software. And if you’re in IT, what’s the hold-up? Your users are waaaaaiting!

But how exactly does it work?

Hey, now that’s a really good question. Because the answer is far from intuitive.

Big picture-wise, it’s simple enough: essentially, you and your buddy Mike (Miguel, Mikhail, whatever) have the same Word doc open. Word tells you that you’re sharing the file with someone else. And you’re both safely editing the file simultaneously. In some instances you can see the edits occurring live in front of you; in others, you’ll see them once your application refreshes the content. If it’s live, you see a little cursor with the name of the person you’re working with. And it’s not just for Word. It also works for PowerPoint, OneNote, and, depending on the situation, Excel.

[Screenshot: real-time co-authoring, with a named cursor showing a colleague’s position in the document]

In actuality, it can be rather confusing, especially once you get into conflict resolution. And that’s why some people are deathly afraid of co-authoring. They feel they’ve lost control, that the domino was tipped before they meant to. On the other hand, many others have been wondering why it’s taken Microsoft so long to finally roll this out.

It’s an interesting dichotomy of user preferences, to say the least.

Co-authoring works differently depending on how you’re editing files. It depends whether you’re using Office Online/Web Apps (within your browser) or the actual applications (as in, launching MS Word and opening a document). I’ll refer to the latter option as the “client app” from now on.

It also depends on the app. Co-authoring doesn’t really make sense in some Excel files, so Excel offers limited co-authoring support.

What it does

You and your colleague(s) can open a Word, Excel, PowerPoint, or OneNote file and edit it at the same time. Big-picture, it’s pretty simple. The details, however… are not.

Knowing when you’re co-authoring

Sometimes it’s not the most obvious thing when you’re editing a file concurrently with someone else. Office will tell you, but you have to keep your eyes peeled. It’s way easier to tell if you happen to be on the same page, slide, or worksheet because you’ll see visual confirmation of changes. Otherwise, you’re dependent on little notifications elsewhere on the screen.

  • Using the client app (Office 2013): [Screenshot: the co-authoring indicator in Word 2013]
  • Using Office Online/Web Apps: [Screenshot: the co-authoring indicator in Office Online]

When you see updates

  • Using the client app: Whenever you save the file, any changes made by someone else while you had the file open will become visible. Your changes will also be uploaded and become visible to others. In essence, save = update.
  • Using Office Online/Web Apps: Updates are almost live, if you’re both editing through the browser. You’ll note a colored cursor that will display your colleague’s name. The color sticks to the user so you always know which changes are theirs and where they’re at in the file. Basically, update = automatic.
  • When using both: If you’re using Office Online and your colleague is using the client app, they will only see updates you’ve made once they save; likewise you will only see their updates when they save. (Basically, saving acts as a sync mechanism between the client app and SharePoint.)

If you’re editing with more than one colleague, anyone using Office Online will see instant updates made by other Office Online editors. Anyone using the client app will only see updates when they save, and the folks using Office Online will see those edits when they’re uploaded with the “save” from the client app.
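The save-as-sync behavior above can be sketched as a toy model. This is purely illustrative (it’s obviously not how SharePoint is implemented), but it captures the rule: Office Online edits upload immediately, while client-app edits upload only on save.

```python
# Toy model of co-authoring update visibility. "Online" editors sync on
# every change; "client app" editors sync only when they save.

class SharedDoc:
    def __init__(self):
        self.server_edits = []          # edits that have reached SharePoint

class Editor:
    def __init__(self, name, doc, online):
        self.name, self.doc, self.online = name, doc, online
        self.pending = []               # local edits not yet uploaded
        self.seen = []                  # server edits this editor has pulled

    def edit(self, text):
        if self.online:                 # Office Online: uploads immediately
            self.doc.server_edits.append((self.name, text))
            self.seen = list(self.doc.server_edits)
        else:                           # client app: stays local until save
            self.pending.append((self.name, text))

    def save(self):                     # client app: save = upload + refresh
        self.doc.server_edits.extend(self.pending)
        self.pending.clear()
        self.seen = list(self.doc.server_edits)

doc = SharedDoc()
alice = Editor("Alice", doc, online=True)
bob = Editor("Bob", doc, online=False)

alice.edit("intro paragraph")           # on the server immediately
bob.edit("summary table")               # invisible to Alice until Bob saves
assert ("Alice", "intro paragraph") in doc.server_edits
assert ("Bob", "summary table") not in doc.server_edits

bob.save()                              # save acts as the sync point
assert ("Bob", "summary table") in doc.server_edits
assert ("Alice", "intro paragraph") in bob.seen
```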

Dealing with conflict resolution

I’ll cover this in a future post. It’s a topic unto itself.

Limits of co-authoring

Check out our infographic covering the size and usage limits for SharePoint. It has some info on co-authoring limits.

[Infographic: co-authoring size and usage limits in SharePoint]

How version history works

This one’s kind of tricky. Assuming you have version history enabled on your document library—because, remember, version history is disabled by default in a new library in SharePoint 2013—you should see the following behavior.

  • Using the client app: A new version is created every time you save the document. Whether it’s a major or minor version depends on the option(s) you or your site owner have chosen in the version history settings for that document library. In essence, save = version.
  • Using Office Online/Web Apps: Even though SharePoint automatically updates the file whenever you make a change when using Office Online—which is why there is no “save” button in Office Online/Web Apps—it doesn’t automatically create a version whenever it saves. Microsoft claims new versions are created every 30 minutes after someone begins making changes to the file. However, actual usage proves this to be inaccurate. Sometimes versions are made every few minutes, and usually the editor that’s “credited” with the version is the one who opened the file before anyone else jumped in. I have no idea why the tool acts differently than what their documentation says. But it definitely does.
    • In SharePoint Online (Office 365), you’re stuck with that time interval (30 min, even if it is inaccurate); in SharePoint 2013 on-premises, your IT department can change this. (Source) So, unlike using the client app, save ≠ version.
    • But! You can force-create a version by checking the file out and checking it back in when you’re done editing. Details are here.

Pro tip: you should brush up on how version history works, and what the differences between using major and minor drafts are.

Working with check in/out

Co-authoring and check in/out are mutually exclusive concepts. That said, you can co-author on a file that lives in a library where check in/out is enabled. But…

  • If “Require Check Out” is enabled on your document library, co-authoring is not available. (Source)
  • If check in/out is enabled (but not required), co-authoring can only occur when files are checked in. (Source)

I strongly advise not using check in/out if you want to make use of co-authoring. It just doesn’t make much sense to use both.
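Those two rules reduce to a tiny predicate. This is a sketch of the logic only; the parameter names are mine, not a SharePoint API.

```python
# Sketch of the co-authoring availability rules above (illustrative names).

def coauthoring_available(require_checkout: bool, checked_out: bool) -> bool:
    if require_checkout:       # "Require Check Out" rules out co-authoring
        return False
    return not checked_out     # otherwise, only checked-in files qualify

print(coauthoring_available(require_checkout=False, checked_out=False))  # True
print(coauthoring_available(require_checkout=True, checked_out=False))   # False
```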

Excel and co-authoring

Excel doesn’t have the best relationship with co-authoring. In fact, in my experience, Excel doesn’t have the best relationship with SharePoint in general. It’s a topic of its own, so I’ll cover it in a separate post.

For best results, use Office Online

No, seriously. If you want to see changes occurring live, or just-about-live, you need to be editing the file in the browser. But, there’s a downside. A good amount of the functionality that you expect in the Office applications isn’t supported in Office Online.

Most notably, Track Changes is missing from Word Online. It’s just not available yet, which blows my mind because it’s kind of the most fundamental of collaborative tools.

Additionally, unless you’re working on a simple table in Excel, you’ll likely not want to use Excel Online: connections between worksheets and macros are just two examples of functionality that will not work in Excel Online (in fact, you get an error when opening these types of files). So while you may be able to see live edits being made, you’re going to be disappointed in every other way.

You can use the client applications, but the updates are delayed. At the rate that I save my files (not often enough), you could have written half of a book and I wouldn’t know it until I hit ‘Save’. Call me a bad user. But that’s a minor issue.

 

by Matt Wade March 31, 2016

Simple service request process



It stands to reason that smaller projects don’t need the same level of project management discipline as larger projects. With a small project, it’s easy to define the work, easy to manage the activities, and there usually isn’t much work associated with managing risk, quality, communication, scope, etc.

In many organizations, a simple service request process is used to manage these small projects. This service request process starts off by defining the work to be done on a simple one- or two-page form — aptly enough called a “Service Request” form.

The process for assigning the work is different as well. When the work definition for a larger project is completed, the project is usually ready to begin. However, for smaller efforts, there may be many more Service Requests than can actually be worked on at any given time. Therefore, a process needs to be established for gathering Service Requests and assigning them to team members based on client priorities. The following Service Request Process can be used for each request:

  1. Client submits the request. The client completes a simple Service Request form that documents the work requested.
  2. Project manager review. The project manager reviews the Service Request to ensure that the work is understood, asking the client questions if necessary to clarify what is being requested.
  3. The effort, cost, and duration are estimated. The project manager provides a high-level estimate of the effort hours, duration, and cost, and adds this information to the Service Request. (If the project manager can’t estimate the work, they assign it to a team member to create the estimates.) When the work is actually assigned, a more detailed estimate can be prepared if necessary.
  4. The request is assigned or backlogged. The project manager and client evaluate the request against the other work that is assigned and on the backlog. They also review the available capacity and skills on the team to determine if the work can be started immediately. If the required resources are not available, or if the work is of lower priority than other Service Requests, the new request is placed on a backlog list.
  5. Periodically review the backlogged work. The project manager and client review the backlog on a regular basis, probably weekly or bi-weekly, and reprioritize the requests on it. When the priority of a Service Request is high enough and the right resources are available, the work can be assigned to begin.
  6. Revalidate the initial information. When the work is assigned to begin, the person(s) doing the work should validate that the information on the Service Request is correct and that the estimates are accurate. If they aren’t, the new information should be documented and discussed immediately to see if it will have an impact on the priority.
  7. Execute the work. The actual execution of the work begins, following a typical short lifecycle for a small project.
  8. Manage the work. Since the request is small, the project manager manages the work as needed.
  9. Close the work. When the work is completed, the client should signify their approval. The Service Request should then be moved to a closed queue that tracks these requests for historical purposes.
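The flow above can be sketched as a minimal tracker. The statuses, field names, and the hours-based capacity model are all illustrative, not part of any standard.

```python
# Sketch of the service request flow: triage each request, assign it if the
# team has capacity, otherwise backlog it; review the backlog periodically;
# archive closed requests for history. All details are illustrative.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    title: str
    priority: int = 0             # higher = more urgent
    estimate_hours: float = 0.0
    status: str = "submitted"     # submitted -> assigned/backlog -> closed

class RequestQueue:
    def __init__(self, capacity_hours):
        self.capacity_hours = capacity_hours   # team hours currently free
        self.backlog, self.active, self.closed = [], [], []

    def triage(self, req, estimate_hours, priority):
        # Review, estimate, then assign or backlog the request.
        req.estimate_hours, req.priority = estimate_hours, priority
        if estimate_hours <= self.capacity_hours:
            self.capacity_hours -= estimate_hours
            req.status = "assigned"
            self.active.append(req)
        else:
            req.status = "backlog"
            self.backlog.append(req)

    def review_backlog(self):
        # Periodic backlog review: pull in the highest-priority work that fits.
        for req in sorted(self.backlog, key=lambda r: -r.priority):
            if req.estimate_hours <= self.capacity_hours:
                self.backlog.remove(req)
                self.capacity_hours -= req.estimate_hours
                req.status = "assigned"
                self.active.append(req)

    def close(self, req):
        # Completed work moves to a closed queue kept for historical purposes.
        self.active.remove(req)
        self.capacity_hours += req.estimate_hours
        req.status = "closed"
        self.closed.append(req)

queue = RequestQueue(capacity_hours=20)
fix = ServiceRequest("Fix TPS report template")
queue.triage(fix, estimate_hours=8, priority=2)       # fits: assigned
migrate = ServiceRequest("Migrate old file shares")
queue.triage(migrate, estimate_hours=40, priority=1)  # too big: backlogged
```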

 

By Tom Mochal | August 2, 2005

Microsoft SharePoint vs Google Docs/Sites


I recently fielded a question from a potential client who wanted to know whether Microsoft SharePoint or Google Docs with Google Sites was a better fit for their organization’s document management and collaboration needs. Although the question is straightforward, the explanation can get a bit complicated.

The short answer is that Google Docs/Sites is a great tool if you do not have an enterprise collaboration platform at your disposal and you need to get a document-sharing site up quickly. However, Google Docs/Sites falls way short of providing the breadth and depth of features that Microsoft SharePoint offers.

SharePoint is a true enterprise platform with capabilities that extend beyond document management and collaboration (e.g., Search, Workflow, and KPI Dashboards). If you have a dozen or more computer users in your organization who need tools other than email and network drives to collaborate, you should strongly consider an investment in the SharePoint platform. Microsoft even offers a free version of SharePoint for small and medium-sized organizations (less than a few hundred computer users), along with premium versions for larger organizations.

Analysis Notes

Document Management

From a document management perspective, Microsoft SharePoint and Google Docs have compelling offerings. Both provide a browser-based user experience for managing documents in a central location and keeping track of a document’s version history.

SharePoint includes a wider variety of document management features than Google Docs, including:

  • Metadata tagging to help you organize and find documents quickly
  • Check-in/Check-out to prevent multiple users from editing a document at the same time
  • Document sets which allow a group of related documents to be treated as a single piece of content that share metadata and version history
  • Records management for managing the lifecycle of documents and providing for the ability to place documents into a legal hold state
  • The ability to trigger workflow processes (e.g., approval/publishing of content) whenever a document is added, changed, or removed

Google Docs may be a better fit than SharePoint in some circumstances:

  • Google Docs is quite a bit easier to set up and configure than SharePoint, so you should be able to get started in less time
  • Organizations with only a simple need to share documents may find Google Docs easier to use
  • Google Docs is often a good fit for organizations with ad-hoc teams that must be brought together quickly (especially when team members hail from different organizations)
  • Google Docs will likely cost substantially less to implement than SharePoint

If you are looking for a document management solution that supports day-to-day employee and interdepartmental document sharing as well as special projects, then SharePoint will be a better fit for your organization in the long-run. If you just need a quick and dirty solution for an ad-hoc project, then Google Docs is probably a better way to go.

Collaboration

SharePoint and Google Docs with Google Sites are pretty far apart on the maturity scale – SharePoint has been around for over 10 years and is a pretty stable solution for the enterprise; Google Docs and Sites were released less than 4 years ago, which is evidenced by a few bugs that bite from time to time.

Both SharePoint and Google product suites include document management systems and the ability to create collaboration sites, but SharePoint includes quite a few additional features. SharePoint is often referred to as a Swiss army knife of collaboration and office productivity features.

Feature Comparison

SharePoint features that are absent from Google’s offering include:

  • Flexible collaboration site templates and structures to meet the varying business needs of different departments and teams
  • Workflow to automate and manage business processes
  • Enterprise search capabilities to index content on your network drive (as well as the content you store inside SharePoint)
  • Configurable lists to capture metadata when storing documents
  • Centralized task lists to replace spreadsheets (great for managing projects)
  • Dashboards that integrate data from other systems to track your Key Performance Indicators
  • Tight integration of Documents, Tasks, and Calendars with the Microsoft Office Suite (e.g. updates made in Outlook, Word, Excel, and PowerPoint will automatically update the central copy inside SharePoint)

Permission Management

SharePoint and Google offerings also differ significantly when it comes to permission management. Google has limited permission management, allowing you to only define who can view content and who can edit content on each site. In SharePoint you have a lot of flexibility regarding the granularity of permissions – within a SharePoint site you can allow people to view some of the content, but not all. Similarly, you can allow people to modify some pieces of content, but not all content. Permissions are also easier to maintain in SharePoint. Access rights for Google Sites and Google Apps are maintained separately, which can sometimes overlap and lead to some confusion or surprise over who has the ability to access or edit content.

Market Share

Organizational adoption of Google Sites is pretty tiny compared to the adoption level of SharePoint. Google Docs has had a few notable customers switch from MS Office (Word/Excel/PowerPoint), but Google Sites hasn’t really taken off yet.

Conclusion

Overall, SharePoint is still a category killer and the clear winner when it comes to document management and collaboration solutions. For good reason, there are over 100 million users of SharePoint world-wide!
Originally Posted by Rick Rietz

Three Steps to Issues Management


Issues are large problems that are impeding the progress of the project. Issue Management is the process of identifying and resolving issues within a project. By quickly and efficiently managing issues, you can:

  • Limit the effects of unforeseen events on the project
  • Reduce the time spent administering project issues
  • Greatly improve your chances of project success

You can complete the issue management process by taking three simple steps:

Step 1: Identify the Issue
Any member of the project team may identify a new project issue. An Issue Form is completed to describe the issue and its impact on the project. The actions required to resolve the issue are also identified.
At this point the Project Manager also needs to determine who needs to be involved in resolving the issue.

Step 2: Investigate and Prioritise the Issue
The Issue Form is then forwarded to the Project Manager, who investigates the issue and determines the overall issue priority. The priority of the issue is determined by its impact on the project’s ability to achieve its stated objectives. If the issue is severely impacting the project, then it is assigned a high priority.
When determining the issue priority, the Project Manager considers whether the:

  • Deliverables listed in the scope are affected
  • Quality targets are affected
  • Schedule end-dates are affected
  • Resources are affected
  • Budget is affected

The Project Manager and project team need to recommend the actions to take to resolve the issue.

Step 3: Resolve the Issue

The Project Manager takes the issue, and the recommendations for resolving it, to the people identified earlier. These people need to decide how to resolve the issue.

The Project Manager is then responsible for scheduling and implementing these actions and reviewing the issue on a regular basis to ensure that it has been resolved accordingly. Throughout the Issue Management Process, the Project Manager can monitor and control issues impacting the project by keeping the issue log up-to-date.
By completing these three steps for each issue that arises, you will be able to minimize the effect that issues have on your project and thereby increase its chances of success.
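One way to make Step 2’s impact checklist concrete is a simple scoring function. The severity scale, weights, and thresholds below are invented for illustration; any real scheme would be tuned to the project.

```python
# Illustrative priority scoring over the five impact areas listed above.
# Severities run from 0 (no impact) to 3 (severe); thresholds are made up.

IMPACT_AREAS = ["deliverables", "quality", "schedule", "resources", "budget"]

def issue_priority(impacts):
    """impacts: dict mapping an impact area to a severity from 0 to 3."""
    score = sum(impacts.get(area, 0) for area in IMPACT_AREAS)
    # Any single severe impact, or a high total, makes the issue high priority.
    if score >= 8 or any(impacts.get(a, 0) == 3 for a in IMPACT_AREAS):
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# A severe schedule slip plus a budget overrun:
print(issue_priority({"schedule": 3, "budget": 2}))    # high
print(issue_priority({"quality": 1, "resources": 1}))  # low
```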

How to Plan in an Agile Environment


Have you ever heard something like: “Agile development expects that we can operate quickly, efficiently and effectively without necessarily having an overall strategic plan”? If so, forget it! Why? Try to build a house from the roof and see what happens!

Detailed planning is as essential to the effectiveness of Agile as it is to Waterfall. The difference between the two approaches lies in timing. Planning is ongoing in Agile, and its incremental approach allows projects to adapt rapidly to changes. Planning is predictive in Waterfall and the client knows exactly what to expect.

Predictive vs Adaptive Projects

Predictive projects are the ones in which the needs are not likely to change (at least we wish that they don’t change) and where the work being done can be “somewhat” identified and measured. Adaptive projects are those where requirements are likely to change – frequently in some cases – and the work is largely creative with high levels of uncertainty.

Categorize the Project

How can we correctly categorize our project? Here are some questions to guide us.

  • Do we have requirements that are not well understood in advance and that are likely to change over the life of the project?
  • Is the domain not well known in all its details?
  • Are our teams trying to solve problems where the solution requires invention?

If we answered yes to the preceding questions, we’re probably facing an adaptive environment; we should take an Agile approach to manage that kind of project, because Agile provides the feedback mechanisms needed to deliver adaptive projects.

How Much Will it Cost?

Okay, we decided to manage our project using an Agile approach. At the first progress review meeting, management wants to know, “How much will it cost and how long will it take?” Organizational resources need to be allocated to the project. We need to provide feedback at a number of levels to answer the initial set of questions and to ensure we are still delivering the best value for our organization’s investment. To effectively answer and satisfy our management, we need to provide them with a plan.

Doing the Right Work and Doing the Work Right

In the initial phase of the project, when it’s not yet a real project but just an initiative to be funded, portfolio and program planning are needed to “do the right work”; in this phase planning is a high-level activity used to decide which initiatives deserve to be funded.

Once that initial filter has been passed and the initiative has been funded, the detailed activities identified by analyzing the requirements have to be planned; this is about making sure that the team delivers against the product vision to provide business value on an iterative and incremental basis.

The Product Vision

The product vision is a strategic tool that clarifies for the team why they are working, what they are working on, and what key constraints they must work within. The vision is often detailed in a “Project Charter” document.

To get an “effective” charter, write it down with the whole team in a collaborative workshop, with the project sponsor and product owner present. If the team members who will work on the product are not in the office, get them together for the workshop to create a “one team” culture (it doubles as a team-building exercise).

If someone comes onboard after the vision has been created, give them a thorough walkthrough, using the project charter as a guide to help them understand the drivers behind the work being undertaken.

Typical contents of the project charter include:

  • Problem/opportunity description
  • Why should we deliver this project? What’s the value for our organization?
  • How does the project align with the organization’s strategic goals?
  • What are we trying to deliver? – let’s provide a high-level solution description
  • Key features of the product that we’re going to deliver
  • Assumptions – where appropriate – and constraints
  • Scope – what’s included and above all what’s excluded from our responsibility?
  • Key timelines to deliver
  • Budget and cost-benefit analysis
  • Subordinated and/or related projects – let’s provide the key milestones of our project for alignment with those projects
  • Risks with mitigation actions – where appropriate

There are some techniques that can be used to organize the “collaborative workshop” that will produce the product vision. A few are listed below:

The Elevator Statement

The “elevator statement”, also called the “elevator pitch”, is a statement of a few sentences that lets any team member explain the purpose, goals, and objectives of the project in the time it takes to ride between floors in an elevator. Imagine stepping into the elevator with your company’s CEO and being asked to explain the project you’re working on before the elevator reaches their floor.

Business Benefits Matrix

A simple matrix which articulates the strategic value that the product is intended to provide. The matrix looks like the following table:

[Table: example business benefits matrix]

There should only be one primary driver, and there might be a number of secondary or tertiary goals. Where there is more than one goal in a column, the goals need to be ranked to avoid the “everything’s critical” conflict.

Filling this matrix in as a group activity helps the team understand the focus of the project.

Scope Matrix

We could use a simple “in/out list” to define what will be done, what will not be done, and where there is uncertainty about deliverables. When we place an item in the “in” column, we state that the team is responsible for its delivery. When we place an item in the “out” column, we state that the team will not spend any time or effort on it.

When we are uncertain about whether an item is in scope, we place it in the “undecided” column, and the project manager investigates further to identify whether the team should move it to the “in” or “out” column.

The Product Roadmap

At the beginning of our initiative we have to produce a product roadmap: a list of the most important features that the product will deliver to address the scope. Of course, if we expect to deliver just one release of the product, the product roadmap will coincide with the release plan.

The product roadmap is a high-level plan, maintained by the product owner and the project manager, that is expected to change over time and is validated against the product vision (remember to plan the validation events, too).

The Release Plan

The release plan consists of the list of features that we’re going to deliver in the next release of the product. Of course, the features included in the release plan are the ones that the product owner and the project manager agreed to release, based on the prioritization made at the level of epics and stories.

A release consists of a number of iterations during which the team will deliver measurable value to the organization.

Stories and epics need to be sized (in story points or ideal days) and prioritized so that the work can be allocated to iterations.

Let’s see how this happens:

  1. The product owner clarifies the goals for the next release.
  2. The team discusses the features needed to address the goals.
  3. The team discusses all factors that can influence the goals, including risk and dependencies between epics and stories. High-risk, high-value features should be tackled earlier than the others so the team can adapt easily if a risk evolves into a problem.
  4. We plan the activities based on the team’s velocity. We can use an estimated initial velocity as the starting point for planning. For subsequent releases, of course, we should use the actual velocity rather than the estimated initial one (unless the team changes significantly between releases). The simplest approach to guessing the initial velocity is to ask our team members how many story points they think they can deliver during an iteration and plan based on that number. It will probably be wrong, but it’s a good enough starting point.

Once we have the estimated velocity, it’s time to plan the iterations:

  1. List the stories and epics for the release in priority order, with their sizes.
  2. Decide how many story points we can deliver in a single iteration; consider the impact of idle time (for example, time spent preparing the tools needed to work).
  3. Add an iteration to the plan.
  4. Add stories to the iteration until it reaches capacity.
  5. Share the plan and ask for feedback to get the strongest commitment.
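The iteration-filling steps above amount to a greedy allocator: walk the prioritized backlog and start a new iteration whenever the next story would exceed the velocity. A minimal sketch (the story names and point sizes are hypothetical):

```python
# Greedy allocation of prioritized stories to iterations, filling each
# iteration up to the team's velocity in story points.

def plan_iterations(stories, velocity):
    """stories: list of (name, points) tuples in priority order."""
    iterations, current, used = [], [], 0
    for name, points in stories:
        if used + points > velocity and current:
            iterations.append(current)      # iteration full; open a new one
            current, used = [], 0
        current.append(name)
        used += points
    if current:
        iterations.append(current)
    return iterations

backlog = [("login", 5), ("search", 8), ("export", 3), ("reports", 8)]
print(plan_iterations(backlog, velocity=13))
# → [['login', 'search'], ['export', 'reports']]
```

Note that a story larger than the velocity still gets its own iteration here; in practice such a story should be split before planning.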

Let’s remember that the release plan is an adaptive plan and will change whenever we need.

The Iteration Plan

Every Agile team should be able to plan the activities included in an iteration based on the lessons from the work done in previous iterations. They use iteration planning to validate the release plan and produce the detailed iteration plan.

We should organize one or more iteration planning sessions to analyze the release plan and update it based on any changes that have happened since the last update.

Changes could happen, for example, due to:

  • the velocity of the work delivered in previous iterations;
  • priority changes in stories or epics;
  • new stories and epics being introduced (e.g., as a result of newly identified risks); or
  • idle time that reduces the team’s ability to complete tasks.

During the first part of the meeting, the product owner explains the current priorities and the team revises the iteration plan to identify the stories to be done in the current iteration.

The number of stories included in the current iteration will be based on “yesterday’s weather”, that is, the team’s velocity based on the amount of work done in the previous iteration.

The reason for establishing velocity based on the amount of work completed in the previous iteration is that the team will very likely complete about the same amount of work as they did before, unless the team or the working environment has changed significantly.

The second phase of this meeting consists of breaking the work down into specific tasks to be “pulled” by each team member during the next iteration, based on his/her ability to complete the task in the time initially estimated for it. Let’s keep the tasks very small – from a few hours to a day or so.

Now it’s time to allocate the tasks and confirm the commitment of all team members – this is the duty of the “iteration manager” (in Scrum, the Scrum Master).

Let’s write the tasks down on cards (e.g., colored Post-its) and hang them on a large, visible surface (e.g., a wall in the open space where the whole team can see them).

Let’s provide the team with an Iteration Backlog on which all members can find the stories and epics included in the current iteration.

We could track the progress of all the tasks on a grid placed on the same wall as the task cards. On the grid we could write down the task, who is responsible for completing it, the estimated time to complete it, the hours remaining and the actual hours used, so we can see whether or not we are behind schedule.

This grid should be completed by every team member to track his/her work against the tasks; it represents their daily commitment.

To check our progress, we should use the so-called burn-down chart: a graph showing the initial estimate and the remaining effort for the iteration.

The burn-down chart can be used by the team to try to improve their estimation skills in the next iteration planning meeting.
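The data behind a burn-down chart is straightforward to compute: an ideal straight line from the initial estimate down to zero, plus the actual remaining effort recorded day by day. The sketch below uses invented hour figures and also flags the days on which the team sat above the ideal line (i.e. was behind):

```python
# Burn-down data: ideal remaining effort per day versus actual remaining
# hours, flagging the days the team was behind. Figures are invented.

def burn_down(initial_estimate, remaining_by_day, total_days):
    """Return (ideal_line, days_behind) for an iteration burn-down."""
    ideal = [initial_estimate * (1 - d / total_days)
             for d in range(total_days + 1)]
    behind = [day for day, rem in enumerate(remaining_by_day, start=1)
              if rem > ideal[day]]
    return ideal, behind

ideal, behind_days = burn_down(
    initial_estimate=100,                    # hours estimated at the start
    remaining_by_day=[92, 85, 70, 68, 50],   # hours left at end of each day
    total_days=10,
)
# behind_days == [1, 2, 4]: on those days more work remained than the
# ideal line allows.
```

Reviewing which days ended up in `behind_days` is exactly the kind of input the team can use to sharpen its estimates in the next iteration planning meeting.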

The “Daily Standup” Meeting

Daily meetings are the most important and effective tool for communicating progress within the iteration. Every day the whole team attends this meeting and the iteration manager asks for the status of every task assigned to each team member.

There are a few simple rules to be taken into account and strictly followed by each member.

  • It is held standing up and lasts no more than 15 minutes (can you afford a standup meeting that runs longer than 15 minutes? Let me know if you can afford that every day…)
  • Each member takes no more than one minute to report the status of the tasks assigned to him/her.
  • No deviating from the story/task at hand, and the status can only be “Done” or “Not Done” – it’s not valid to say “I’m at 75%”; in that case it’s “Not Done”, and the member has to declare how many hours he/she needs to complete it.
  • If a member has an obstacle to overcome, he/she raises it separately after the meeting.
  • Each team member answers just the following questions:
    • What did you do yesterday?
    • What will you do today?
    • What’s in your way? (“Nothing” means you expect to complete your tasks within the estimated time, with no obstacles.)

The iteration manager is responsible for removing all the obstacles declared by the team members (after the standup meeting) so the team can be fully productive.

Just one last rule: we shouldn’t punish team members for missing task commitments; otherwise the team will adapt to that behavior and stop telling the truth about task status during the standup meeting.

Sometimes we will estimate badly, or something will happen that prevents someone from working on their task. That isn’t a bad thing – it should be motivation to improve our skills.

When planning in any methodology you’ll want a tool robust enough to cover all the bases, including monitoring that plan and then reporting on it. That’s where ProjectManager.com comes in. Its online, collaborative suite of software gives a project leader the tools to do the job. Try it yourself with this free 30-day trial.

MoSCoW Prioritisation

Home Page, Projects

10. MoSCoW Prioritisation

10.1 Introduction

In an Atern project where time has been fixed, understanding the relative importance of things is vital to making progress and keeping to deadlines. Prioritisation can be applied to requirements, tasks, products, use cases, user stories, acceptance criteria and tests. MoSCoW is a technique for helping to understand priorities. The letters stand for:

  • Must Have
  • Should Have
  • Could Have
  • Won’t Have this time

The reason we use MoSCoW in Atern is that simply rating requirements as High, Medium or Low importance leaves those priorities undefined. Using MoSCoW makes priorities specific: the choice of Must, Should, Could or Won’t Have conveys the consequence of failing to deliver that requirement.

10.2 The MoSCoW Rules

These are some possible definitions of what the different priorities mean. It is important to agree the definitions with the users. Preferably this is agreed before the requirements are captured – i.e. before it becomes emotive.

10.2.1 Must Have

These provide the Minimum Usable Subset (MUS) of requirements which the project guarantees to deliver. This may be defined using some of the following:

  • Cannot deliver on target date without this
  • No point in delivering on target date without this; if it were not delivered, there would be no point deploying the solution on the intended date
  • Not legal without it
  • Unsafe without it
  • Cannot deliver the Business Case without it

Ask the question, “what happens if this requirement is not met?” If the answer is “cancel the project – there is no point in implementing a solution that does not meet this requirement” then it is a Must Have requirement. If there is some way round it, even if it is a manual workaround, then it will be a Should Have or a Could Have requirement. Downgrading a requirement to a Should Have or Could Have does not mean it won’t be delivered, simply that delivery is not guaranteed.

10.2.2 Should Have

  • Important but not vital
  • May be painful to leave out, but the solution is still viable
  • May need some kind of workaround, e.g. management of expectations, some inefficiency, an existing solution, paperwork, etc.

A Should Have may be differentiated from a Could Have by reviewing the degree of pain caused by it not being met, in terms of business value or numbers of people affected.

10.2.3 Could Have

  • Wanted or desirable but less important
  • Less impact if left out (compared with a Should Have)

10.2.4 Won’t Have this time

These are requirements which the project team has agreed it will not deliver. They are recorded in the Prioritised Requirements List where they help clarify the scope of the project and to avoid being reintroduced ‘via the back door’ at a later date. This helps to manage expectations that some requirements will simply not make it into the delivered solution, at least not this time around.

10.3 Ensuring effective prioritisation

10.3.1 Agreeing how priorities will work

Prior to requirements capture, the definitions of Must Have, Should Have, Could Have and Won’t Have need to be agreed with the business. Some examples are described above. However, the Must Have definition is not negotiable. Any requirement defined as a Must Have will have a critical impact on the success of the project. The Project Manager or Business Analyst should challenge requirements if they are not obvious Must Haves; it is up to the Business Visionary or their empowered Business Ambassador to prove a requirement is a Must Have. If he/she cannot, it is a Should Have at best.

Agree escalation or decision-making processes, e.g. Business Ambassador to Business Visionary to Business Sponsor, and agree the level of empowerment around decision-making at each level.

At the end of an increment, all unsatisfied requirements are reprioritised in the light of the needs of the next increment. This means that, for instance, a Could Have that is unsatisfied in an increment may be demoted subsequently to a Won’t Have, because it does not contribute enough towards the business needs to be addressed next.

10.3.2 The Business Sponsor’s perspective

The MoSCoW rules have been cast in a way that allows the delivery of the Minimum Usable Subset of requirements to be guaranteed. Both the team and those they are delivering to can share a confidence in this because of the high degree of contingency allowed in the delivery of the Must Haves. A rule of thumb often used is that Must Have requirements do not exceed 60% of the effort. If this rule is followed, then that ensures contingency represents at least 40% of the total effort.

So is this all that the Business Sponsor can expect to be delivered? The answer is an emphatic “No”. Whilst understanding that there is a real difference between a guarantee and an expectation, the Business Sponsor can reasonably expect more than this to be delivered except under the most challenging of circumstances. This is where the split between Should Haves and Could Haves comes into play.

If the Should Haves and Could Haves are split evenly with 20% of the total effort associated with each then the Musts and Shoulds, in combination, will represent no more than 80% of the total effort. The remaining 20% of effort associated with the Could Haves is now the contingency available to protect the more important requirements. By most standards this is still a very reasonable level of contingency and rightly implies that the Business Sponsor can reasonably expect the Should Have requirements to be met. It is just that, quite understandably, the team does not have the confidence to make this a guarantee. So, sensible prioritisation, combined with timeboxing leads to predictability of delivery and therefore greater confidence. Keeping project metrics to show the percentage of Should Haves and Could Haves delivered on each increment or timebox will either re-enforce this confidence, if things are going well, or provide an early warning that some important (but not critical) requirements may not be met if problems arise.

10.3.3 MoSCoW and the Business Case

The best way to address prioritisation initially is with a quantified Business Case. This should support Feasibility and be revisited during Foundations. If a Business Case does not exist, the Business Sponsor and Business Visionary need to articulate the business drivers, preferably in a quantified form. Some practitioners believe that any requirement contributing to the Business Case should be defined as Must Have, others accept that a small reduction in benefit is unlikely to make a project completely unviable and desire a more pragmatic solution. These practitioners believe that it is sensible to allow the requirements contributing to the Business Case to span Must Have and Should Have requirements.

Figure 10a: MoSCoW and the Business Case

It is likely that contractual relationships (whether formally between organisations or informally within an organisation) will influence the decision on this issue one way or the other.

10.4 Levels of priority

MoSCoW prioritisation is really only meaningful in a specified timeframe and the same requirement may have a different priority in that context. A Must Have requirement for the project as a whole may not be a Must Have for the first increment. For example, even if a Must Have requirement for a computer system is the facility to archive old data, it is very likely that the solution could be used effectively for a few months without this facility being in place. In this case, it is sensible to make the archive facility a Should or a Could Have for the first increment even though delivery of this facility is a Must Have before the end of the project. Similarly, a Must Have requirement for an increment may be included as a Should or a Could Have for an early Development Timebox. Many consider this approach to be sensible as it allows the more important requirements to be addressed earlier rather than later but, if taking this approach, beware the risk of confusion. Each deliverable effectively has two or even three priorities in different timeframes and the Project Manager needs to ensure that the team do not lose sight of the real business priorities. The best way to deal with this is to create a Timebox PRL, a subset of the Project PRL that is specifically associated with a timebox and leave the priorities unchanged on the main PRL for the project.

10.5 What to prioritise

Every item of work has a priority. Priorities are set before work commences and kept under continual review as the work is done. As new work arises either through introduction of a new requirement or through the exposure of unexpected work associated with existing requirements, the decision must be made as to how critical they are to the success of the current work using the MoSCoW rules. All priorities should be reviewed throughout the project to ensure that they are still valid.

10.6 How many of each priority?

When deciding how much effort should go into Must Have requirements, bear in mind that anything other than a Must is, to some degree, contingency. The aim is to get the percentage effort for Must Haves (in terms of effort to deliver) as low as possible and to be wary of anything above 60%, i.e. 60% Must Haves, 40% Should Haves and Could Haves. Won’t Haves are excluded from the calculation, as they won’t be part of this project/increment/timebox. Levels of effort above 60% for Must Haves introduce a risk of failure, unless the team is working in a project where estimates are known to be accurate, the approach is very well understood and the environment is understood and risk-free in terms of the potential for external factors to introduce delays.
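The rule of thumb above is easy to check mechanically. A hedged sketch with an invented requirements list: effort is summed per priority, Won’t Haves are excluded, and the Must Have share is compared against the 60% threshold.

```python
# Check the 60% Must Have rule of thumb over a prioritised requirements
# list of (priority, effort) pairs. The list and figures are invented.

def must_have_share(requirements):
    """Fraction of in-scope effort taken by Must Haves (Won't Haves excluded)."""
    in_scope = [(p, e) for p, e in requirements if p != "Won't"]
    total = sum(e for _, e in in_scope)
    musts = sum(e for p, e in in_scope if p == "Must")
    return musts / total

reqs = [("Must", 30), ("Must", 25), ("Should", 20),
        ("Could", 20), ("Won't", 15)]
share = must_have_share(reqs)   # 55 / 95, about 0.58
risky = share > 0.60            # False: within the rule of thumb
```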

10.7 Hierarchies of priorities

Requirements are identified at various levels of detail, from a high-level strategic viewpoint (typically at Feasibility) through to a more detailed, implementable level (typically during Exploration and Engineering). High-level requirements can usually be decomposed and it is this decomposition that can help resolve one of the problems that confront teams: all requirements appear to be Must Haves. If all requirements really were Must Haves, the flexibility derived from the MoSCoW prioritisation would no longer work. There would be no lower priority requirements to be dropped from the deliverables to get the project back on time and budget. In fact, this goes against the whole Atern ethos of fixing Time and Resources and flexing Features (the triangles diagram).

Believing everything is a Must Have is often symptomatic of insufficient decomposition of requirements. A high-level Must Have requirement frequently yields a mix of sub-requirements, each with a different priority. Flexibility is once more restored and some of the detailed functionality can be dropped from the delivered solution so that the project deadline can be met. Where, for example, a Should Have requirement has a Must Have beneath it, this signifies that if the higher-level requirement is delivered at all, it must include that lower-level requirement to be acceptable.

10.8 Tips for assigning priorities

  1. Work closely with the Business Visionary to ensure they are fully up to speed as to why and how Atern prioritises requirements.
  2. Start all requirements as Won’t Haves and then justify why they need to be given a higher priority.
  3. For each requirement that is proposed as a Must Have, ask: “What happens if this requirement is not met?” If the answer is “Cancel the project. There is no point in implementing a solution that does not meet this requirement,” then it is a Must Have requirement.
  4. Ask: “I come to you the night before deployment and tell you there is a problem with a Must Have requirement and that we can’t deploy it – will you stop the deployment?” If the answer is “yes” then this is a Must Have requirement.
  5. Is there a workaround, even if it is manual? If there is, then it is not a Must Have requirement. Compare the cost of the workaround with the cost of delivering it, including the cost of any associated delays.
  6. Ask why is the requirement needed – for this project and this increment.
  7. If there is a Business Case in sufficient detail, can it be used to justify the intended priority? If not, create one.
  8. Is there more than one requirement implied in a single statement? Are they of the same priority? Decompose the requirement!
  9. Is this requirement dependent on any others being fulfilled? A Must Have cannot depend on the delivery of anything other than a Must Have because of the risk of it not being there.
  10. Allow different priorities for levels of acceptability of a requirement. For example: “The current back-up procedures will be followed to ensure that the service can be restored as quickly as possible.” How quickly is that? Given enough time and money, it could be within seconds. The business may say it Should happen within four hours, but it Must happen within 24 hours, for example.
  11. Can this requirement be decomposed? Is it necessary to deliver each of those components to fulfil the requirement? Are the decomposed elements of the same priority as each other?
  12. Tie the requirement to a project objective. If the objective is not a Must Have, then probably neither is the requirement relating to it.
  13. Remember that team members may cause scope creep by working on the fun things rather than the important things. MoSCoW can help avoid this.
  14. Does the priority change with time? For example, for an initial phase, it is a Should Have but it will be a Must Have for the second increment.
  15. Prioritise defects/bugs, using MoSCoW.
  16. Prioritise testing, using MoSCoW.
  17. Use MoSCoW to prioritise your To Do list. It can be used for activities as well as requirements.
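Tip 9 in the list above – a Must Have cannot depend on anything other than a Must Have – lends itself to an automated check. This is an illustrative sketch over an invented requirements table, not part of Atern itself:

```python
# Validate tip 9: flag any Must Have that depends on a lower-priority
# requirement. The requirement names and table below are invented.

PRIORITY_RANK = {"Must": 3, "Should": 2, "Could": 1, "Won't": 0}

def invalid_dependencies(reqs):
    """Return (requirement, dependency) pairs where a Must Have depends
    on something below Must Have priority."""
    bad = []
    for name, (prio, deps) in reqs.items():
        if prio == "Must":
            bad += [(name, d) for d in deps
                    if PRIORITY_RANK[reqs[d][0]] < PRIORITY_RANK["Must"]]
    return bad

requirements = {
    "archive-data":   ("Must",   ["audit-log"]),   # depends on a Should Have!
    "audit-log":      ("Should", []),
    "export-reports": ("Could",  []),
}
invalid_dependencies(requirements)   # [("archive-data", "audit-log")]
```

Either the dependency is promoted to Must Have or the dependent requirement is downgraded; the check simply makes the conflict visible.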

10.9 Summary

MoSCoW (Must Have, Should Have, Could Have, Won’t Have this time) is primarily used to prioritise requirements, although the technique is also useful in many other areas. Atern recommends no more than 60% effort for Must Haves for a project, with 40% Shoulds and Coulds. Anything higher than 60% poses a risk to the success and predictability of the project, unless the environment is well understood, the team is established and the external risks are minimal.

Single Sign On – OneLogin

Home Page, O365, Uncategorized

Adoption Drivers

With a goal of becoming a billion dollar in revenue company by 2017, DISYS continues to grow rapidly and expand globally. After implementing several cloud-based enterprise applications such as SalesForce, Office 365 and BMC Remedyforce, the company was looking for an identity management solution that was scalable and could accommodate the addition of new systems.

“There was one that SalesForce offered, and some other pretty crude upload/download methods that were not automatic,” said Hachwi, the IT infrastructure manager responsible for desktop, network and voice services, as well as systems architecture. Hachwi’s team looked to Netflix for guidance. “We’re trying to really mirror how Netflix enterprise IT operates and push new applications and infrastructure to the cloud,” said Hachwi. “We don’t want to maintain a large internal infrastructure with a large team just to manage hardware. DISYS is continuing to grow, and we need to get the infrastructure in place to support that growth quickly. IT can’t waste time and valuable resources procuring, configuring and maintaining hardware and network infrastructure.”

BYOD Policy for Dispersed Workforce a Must

With 650 employees and roughly 4,000 consultants working at client sites, it was important to provide access to corporate applications via the web. Additionally, as DISYS expands internationally, supporting consultants will require secure, reliable access to the corporate network and all necessary business applications, regardless of the user’s location or access device.

Having consultants spread out geographically presents a unique challenge, since deploying and managing laptops to all of them would be way too costly and time-consuming, let alone present a huge security risk. “Our focus should be maintaining our systems, the data, and access to the data, while our consultants manage the hardware,” said Hachwi.

Keeping Infrastructure Simple

DISYS already had a datacenter in place, with plenty of expansion capability and compute and storage resources. “We had the datacenter because we run PeopleSoft, and we want to keep that in-house,” said Hachwi. “But as we added more applications into the mix to resell as services to our clients, we wanted to keep the infrastructure simple and easy to scale quickly.”

Maintaining the infrastructure to support additional applications such as Office 365 and others would have been too big of a burden. “We would have had to spin up two or three more servers, maintain firewall rules, manage updates, and support everything internally.” said Hachwi.

Rapid, Secure Access, Without Rework

It was important to have access to the apps, because Hachwi’s team needed to show their clients they were using them and demonstrate best practices. “It’s our responsibility to provide access to the apps in a timely manner,” he said.

Although Hachwi’s team considered deploying Okta, the platform didn’t offer the integration capabilities that they found with OneLogin.

Why OneLogin?

SOLUTION

OneLogin provides the fastest path to identity management in the cloud with an on-demand solution consisting of single sign-on, multi-factor authentication, directory integration, user provisioning and a catalog with thousands of pre-integrated applications.

Fast, Painless Deployment

Deploying OneLogin was quick and painless. “All the directory integration was already done,” said Hachwi. “I think it took us 30 minutes total. Using AD-FS would have had a much larger impact, and it would have affected our disaster recovery setup strategy, as well. We would have to set up and maintain those servers, as well as back them up. OneLogin eliminated all that hassle and made it really easy.”

Rapid Integration with Web Apps

OneLogin provides access control by connecting to Active Directory or LDAP servers directly; no firewall changes are necessary. Up-front integration work already built into the platform provides near-instant connectivity to business-critical cloud applications, without rework. Users enjoy one-click access to all web apps from a browser or mobile device. Additionally, strong authentication policies such as PKI certificates, OneLogin’s free Mobile OTP or third-party authentication vendors ensure secure access.

“OneLogin fits really well into our infrastructure, with easy setup and configuration, and the ability to customize rules and user roles,” said Hachwi. “It makes application deployment simple and streamlined for our team to manage and gives our dispersed employees and contractors secure application access at the click of a mouse.”

RESULTS

Fast toolset integration helps DISYS stay on top of its projects. “When you start growing so quickly, things can start breaking, and you have to keep up,” he said. “OneLogin helps us because we don’t have to spend time and effort on application deployment.”

Hachwi said the Office 365 roll out is the best example. “Using ADFS integration would have taken us a minimum of a week getting everybody set up and tested, and then the maintenance to keep it going would have really taxed my team. OneLogin boiled all that down to 30 minutes. It can’t get any simpler than that.”

As Hachwi’s team adds new toolsets, OneLogin will be the enabling technology moving forward. For example, the team was able to get BMC RemedyForce up and running on top of SalesForce in just days, because the integration was already in place.

“Our goal is to enable our users and to deploy solutions as quick as possible,” he said. “When we consider adding another tool, we look at the integration into OneLogin as part of the decision process.”

DISYS Uses OneLogin to Give 4000+ Employees and Consultants Secure Access to Office 365 and other Web Apps on Any Device

Collin Hachwi, IT infrastructure manager at Digital Intelligence Systems (DISYS), supports the company’s team of more than 650 employees and 4,000 independent consultants around the globe—many of whom are remote workers who use their own desktops, laptops, tablets or smartphones to conduct business. DISYS, a global managed staffing and services company, utilizes cloud-based enterprise applications such as SalesForce to streamline many of its business processes, but when it came time to add new applications into the mix, Hachwi knew the company needed to consider a strong identity management solution that was scalable to accommodate the addition of new applications and users. “Using ADFS integration would have taken us a minimum of a week getting everybody set up and tested, and then the maintenance to keep it going would have really taxed my team. OneLogin boiled all that down to 30 minutes. It can’t get any simpler than that.”

Collin Hachwi

IT Infrastructure Manager

6 CRM predictions for 2016

CRM, Home Page, Strategy

So what will be the big trends in CRM in 2016? Here are six predictions.

CRM software will become even more social. “In 2016, we’ll see a lot more CRM providers adding new social media features, whether that be tracking customer interactions or suggesting new contacts,” says Marc Prosser, cofounder, Fit Small Business. “Nimble is out ahead on this, but expect others to add these features while their team (and others) devise new ways CRM can take advantage of social media.”

Mobile CRM will become a must-have. In 2016, “we’ll see CRM go mobile in a big way,” says Prosser. “So far, most mobile CRM apps have focused on providing a basic phone-ready version of the desktop version, usually without the full set of features.” Over the next 12 months, however, “expect to see CRM mobile apps adding features that interact with map and note-taking apps.” Also, “CRM will become less hierarchical and easier to use on the go.”

Sales reps will rely on “mobile CRM [to] keep connected and in touch with prospects and their sales manager,” adds Sean Alpert, senior director, Product Marketing, Sales Cloud, Salesforce. “Real-time data [will] keep reps in the know about everything from usage rates to open service tickets to breaking news about the prospect they’re about to visit. And, mobile CRM [will become a] powerful sales tool as more and more reps eschew traditional slides in favor of showing a demo on their phone or pulling up the latest analytics or dashboards on their [mobile] device.”

Integration will be the name of the game. “It’s increasingly important that your CRM be able to seamlessly integrate with your ecommerce platform, your marketing automation software, your analytics software, your accounting system… the list goes on and on,” says Katie Hollar, CRM expert at Capterra, an online tool for businesses to find the right software. “Rather than spending hours downloading and uploading CSVs of data from one system to another, CRM users will demand that their provider build these native integrations with other platforms to make them more efficient. And if CRM vendors can’t keep up with the demand, users will switch systems, finding one that works better with their existing infrastructure.”

“CRMs will evolve from sales-oriented tools to truly integrated marketing and sales platforms,” predicts Kathleen Booth, CEO, Quintain Marketing. “There has already been some movement in this direction, with many CRMs, such as Salesforce, offering integrations with marketing software. But in the future, integrations will be replaced by all-in-one software platforms that truly marry the needs of sales and marketing,” she says. “One example of a company that is doing this successfully right now is HubSpot, which added a free CRM to its marketing software last year. Expect more companies to enter this market in 2016.”

Vertical CRMs will give traditional CRM solutions some serious competition. “In 2016, the ‘verticalization’ of CRM solutions will be accelerated,” says Adam Honig, cofounder and CEO of Spiro, a personal sales app for salespeople. “A real estate salesperson has different needs than a medical device salesperson, and companies are increasingly realizing that they could benefit from using industry-specific CRM solutions like Veeva, Vlocity and OpenGov,” he says. “These vendors’ built-in best practices and processes provide a level of expertise that companies just don’t get with a generic CRM solution.”

As a result, “horizontal CRMs will start being replaced by industry-specific vertical CRMs that help you navigate the specific challenges of your industry,” says Anatoly Geyfman, CEO, Carevoyance. “Healthcare is a big example of this,” he says. “Veeva, a CRM for the pharma [and life sciences] industry, was in the first wave of these, but the wave is not over.” Now, as a result of an influx of industry-specific software solutions, “even Salesforce is releasing industry-specific features and brands for its CRM product.”

More CRM platforms will be equipped with predictive analytics capabilities. “In 2016, CRM systems will have analytics engines behind them that will enable the ability to provide real-time offers to customers based on predicting what they will want next or what kind of product or service they might buy next,” says Rebecca Sendel, senior director, Data Analytics and Customer Experience Management Programs, TM Forum, a global industry association for digital businesses.

“Predictive analytics combined with CRM data gives marketers and salespeople the chance to learn, at a deeper level, customers’ habits and then react to those in real time,” says Vicki Godfrey, CMO, Avention, a provider of data solutions. “This makes for more personalized interactions, which leads to increased sales, better customer relationships and reduced churn rates.”

Look for the CRM of Things. “We’ve seen the Internet of Things (IoT) make major headway this past year, and CRM will begin to reap the benefits in 2016,” says Dylan Steele, senior director, Product Marketing, App Cloud & IoT Cloud, Salesforce. “Companies today want a complete understanding of their customers, and with billions of connected devices generating 2.5 quintillion bytes of data every day, it’s more important than ever to know how this data can create an even more personalized customer interaction.”

So expect to “see smart devices linked to CRM, enabling automated business notifications, follow-ups for sales support, and billing processes that will redefine immediacy for customer service,” says Kevin Roberts, director of Platform Technology at FinancialForce.com, a cloud ERP solution provider.

By Jennifer Lonoff Schiff, CIO, Nov 23, 2015, 6:09 AM PT

True Cloud Architecture or just Cloud Hosted – Cirrus True Cloud

Home Page, Strategy

If you take a single instance of a product, host it in a data centre and connect it up to your site or wide area network (WAN), you have simply changed the location of your infrastructure. Yes, a Telecity data centre in London will be far more secure than your own server room, but the operating principles and single points of failure remain.
For most services that are not business critical, or what are called “high availability” services, this works and is enough. For services whose going down for an afternoon would have a serious business impact – as we saw yesterday – you need to look at True Cloud.

True Cloud is where you have multiple instances of your product in different locations, and services can be consumed from all the locations in real time. The key here is “real time”. It must be a live network, not a failover plan of moving from one data centre to another when something like yesterday occurs – that can take hours, plus the process of moving back at some point.

When looking at telephony, most would argue that standard office users, whilst negatively impacted by an afternoon outage, could be managed via mobile devices and email communication. The contact centre, however, is 100% business critical: failing to serve or sell to your customers for an afternoon can have astronomical consequences.

Here is what a True Cloud network looks like:

  • Three data centres in different geographical locations: Manchester, Birmingham and London.
  • Any Cirrus end point (Cirrus desktop client (vDesk), mobile or landline DDI) can consume calls from all 3 data centres in real time.
  • All customers have an exact replication of their service on ALL 3 data centres. If a data centre goes down, your latest service settings are operated seamlessly from the remaining two data centres.
  • When breaking out of the Cirrus network, we connect over 11 internet service providers (ISPs), meaning your contact centre is not reliant on a single network to stay operational. Public internet issues like yesterday’s can be managed and routed around in real time, which is not possible on a point-to-point SIP trunk set up over one network.
  • Cirrus talks to every end point in real time to check its connection, dynamically routing around issues and congestion to deliver our quality of service (QoS) guarantee.
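The bullet points above can be sketched as a routing loop. This is purely an illustration: the data centre names come from the article, while the health-check and site-selection logic are invented stand-ins for Cirrus’s real QoS routing.

```python
# Hypothetical sketch of "True Cloud" call routing: every end point is
# served by whichever data centres currently pass a health check, so a
# failed site is routed around in real time rather than via failover.
# The selection logic below is illustrative, not Cirrus's actual algorithm.

DATA_CENTRES = ["Manchester", "Birmingham", "London"]

def healthy_sites(status):
    """Return the data centres currently passing their health check."""
    return [dc for dc in DATA_CENTRES if status.get(dc, False)]

def route_call(status):
    """Pick a live site for the next call; fail only if ALL sites are down."""
    live = healthy_sites(status)
    if not live:
        raise RuntimeError("total outage: no data centre reachable")
    return live[0]  # a real network would pick the least-loaded or nearest site

# If London goes down, calls continue seamlessly from the other two sites.
print(route_call({"Manchester": True, "Birmingham": True, "London": False}))
```

The point of the sketch is that routing is a per-call decision against live health data, not a one-off data-centre migration.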

No single site is immune, no matter how large or resilient it is, or how many times the contract states “no single point of failure”. When paying on a subscription cost model (effectively renting services), you should get access to a network, infrastructure and resilience that wouldn’t make financial sense to build yourself. With hosted contact centre you don’t; with Cirrus True Cloud you do. It really is that simple.

Retail Big Data

Uncategorized
Bernard Marr

Best-Selling Author, Keynote Speaker and Leading Business and Data Expert

Big Data and Shopping: How Analytics is Changing Retail

Drones zooming through the skies to deliver packages we haven’t even ordered yet: an (already somewhat clichéd) vision of how technology, Big Data and analytics will impact the retail landscape in the near future.

But flights of fantasy aside, the way we buy and sell is evolving rapidly. Both online and offline, retailers that embrace a data-first strategy for understanding their customers, matching them to products and parting them from their cash are reaping dividends.

Although we are not quite (yet) at the stage where drone delivery and mind-reading predictive dispatch are mainstream, things have moved on greatly from early Big Data retail experiments, such as Target’s infamous attempts to work out who was pregnant. Today, retailers are constantly finding innovative ways to draw insights from the ever-increasing amount of structured and unstructured information available about their customers’ behavior.

I have done a lot of work with leading retailers over the past 12 months and thought it would be a good time to take a look at some of the cutting-edge applications of analytics in the world of shopping, offline as well as online. Major bricks ‘n’ mortar chains have fought hard to keep up with, and in some ways surpass, the advances in technology driven by the online retail boom. And many have found that their model offers specific opportunities to monitor and understand customer behavior which their online competitors just can’t match.

Big Data analytics is now being applied at every stage of the retail process – working out what the popular products will be by predicting trends, forecasting where the demand will be for those products, optimizing pricing for a competitive edge, identifying the customers likely to be interested in them and working out the best way to approach them, taking their money and finally working out what to sell them next.

Predicting Trends

Today, retailers have a wide range of tools available to them in order to work out what will be this season’s “must have” items, whether that be children’s toys or designer dresses. Trend forecasting algorithms comb social media posts and web browsing habits to work out what’s causing a buzz, and ad-buying data is analysed to see what marketing departments will be pushing. Brands and marketers engage in “sentiment analysis”, using sophisticated machine learning-based algorithms to determine the context when a product is discussed, and this data can be used to accurately predict what the top selling products in a category are likely to be.
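As a toy illustration of the sentiment-scoring step described above (real systems use machine-learning models that weigh context; the word lists and sample posts below are invented):

```python
# Minimal keyword-based "sentiment analysis" over social posts, used to
# rank products by buzz. Deliberately simplistic: production systems use
# trained models, not word lists. All data here is made up.

POSITIVE = {"love", "great", "amazing", "want"}
NEGATIVE = {"hate", "awful", "broken", "disappointed"}

def sentiment(post):
    """Score one post: +1 per positive word, -1 per negative word."""
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def rank_products(posts_by_product):
    """Rank products by total sentiment across their mentions."""
    scores = {p: sum(sentiment(x) for x in posts)
              for p, posts in posts_by_product.items()}
    return sorted(scores, key=scores.get, reverse=True)

posts = {
    "toy-drone": ["love this amazing drone", "my kids want one"],
    "board-game": ["awful rules and broken pieces"],
}
print(rank_products(posts))  # the product with the most positive buzz first
```

Scaled up over millions of posts, this kind of score is one input to predicting a category’s likely top sellers.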

Forecasting demand 

Once there’s an understanding of what products people will be buying, then retailers work on understanding where the demand will be. This involves gathering demographic data and economic indicators to build a picture of spending habits across the targeted market. Russian retailers, for example, have found that the demand for books increases exponentially as the weather gets colder. So retailers such as Ozon.ru increase the amount of book recommendations which appear in their customers’ feeds as the temperature drops in their local areas.
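A minimal sketch of the weather-driven weighting described above. The thresholds and the linear ramp are invented (the article only says book recommendations rise as local temperatures fall), so treat this purely as an illustration of the mechanism:

```python
# Hypothetical weighting: the colder it gets in a customer's area, the
# larger the share of their feed devoted to book recommendations.
# All numbers are invented for illustration.

def book_recommendation_share(temp_celsius):
    """Fraction of the feed devoted to books, rising as it gets colder."""
    if temp_celsius >= 20:
        return 0.10                               # warm: baseline share
    if temp_celsius >= 0:
        return 0.10 + (20 - temp_celsius) * 0.01  # cooling: ramp up
    return 0.35                                   # freezing: capped maximum

for t in (25, 10, -5):
    print(t, book_recommendation_share(t))
```

The real signal would come from per-region demand data rather than a hand-set ramp, but the shape of the rule is the same: demand forecast in, feed weighting out.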

Optimizing pricing 

Giant retailers such as Walmart spend millions on their real time merchandising systems – in fact Walmart is currently in the process of building the “world’s largest private cloud” to track, as they happen, millions of transactions every day. Algorithms track demand, inventory levels and competitor activity and automatically respond to market changes in real time, allowing action to be taken based on insights in a matter of minutes.

Big Data also plays a part in helping to determine when prices should be dropped, known as “mark down optimization”. Prior to the age of analytics, most retailers would simply reduce prices at the end of a buying season for a particular product line, when demand had almost gone. However, analytics has shown that a more gradual reduction in price, from the moment demand starts to sag, generally leads to increased revenues. Experiments by US retailer Stage Stores found that this approach, backed by a predictive model of the rise and fall of demand for a product, beat a traditional “end of season sale” approach 90% of the time.
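The mark-down logic can be sketched with a deliberately toy demand model. Everything here (the decay rate, the discount lift, both price schedules) is invented for illustration and is not Stage Stores’ actual model; it simply shows why gradual cuts can out-earn a single clearance once discounts lift sales while interest decays.

```python
# Toy model of "mark down optimization": weekly interest in a product
# decays, while a discount below full price lifts units sold.
# All numbers are invented for illustration.

def units_sold(week, price, full_price=20, base=100, decay=0.7):
    """Invented demand curve: interest decays weekly, discounts lift sales."""
    interest = base * decay ** week
    discount_lift = 1 + 2 * (full_price - price) / full_price
    return interest * discount_lift

def revenue(prices):
    """Total revenue over the season for a weekly price schedule."""
    return sum(p * units_sold(week, p) for week, p in enumerate(prices))

# Strategy A: hold full price, then slash 50% for the last two weeks.
end_of_season = [20, 20, 20, 20, 20, 20, 10, 10]
# Strategy B: trim the price a little each week once demand starts to sag.
gradual = [20, 20, 19, 18, 17, 16, 15, 14]

# Under this model the gradual markdown earns more total revenue.
print(round(revenue(end_of_season)), round(revenue(gradual)))
```

The predictive piece in a real system is estimating the decay and lift curves per product; once you have those, comparing schedules is the easy part.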

Identifying customers

Deciding which customers are likely to want a particular product, and the best way to go about putting it in front of them, is key here. To this end retailers rely heavily on recommendation engine technology online, and on data collected through transactional records and loyalty programs both offline and online. Although Amazon may not yet be ready to ship products directly to our doors before we order them, it is already pushing them in the general direction: demand is forecast for individual geographic areas based on the demographics of its customers in those areas, so that when orders do arrive they can be fulfilled more quickly and efficiently. Data on how individual customers interact and make contact with retailers is used to decide the best way to get their attention with a particular product or promotion, be it email, SMS or a mobile alert from an NFC transmitter when they walk or drive by a store.

Attracting the right kind of customers to your bricks ‘n’ mortar stores is key too, as US department store giant Macy’s recently realized. After its analytics showed a dearth of the vital “millennial” demographic, it opened the One Below basement at its flagship New York store, offering “selfie walls” and while-you-wait customized 3D-printed smartphone cases. The idea is to attract young customers who will hopefully go on to have an enduring lifetime value to the business.

Taking the money

Analytics has revealed that a great number of customer visits to online stores fail to convert at the last minute, when the customer has the item in their shopping basket but doesn’t go on to confirm the purchase. Theorizing that this was because customers often can’t find their credit or debit cards to confirm the details, Swedish e-commerce platform Klarna moved its clients (such as Vistaprint, Spotify and 45,000 online stores) onto an invoicing model, where customers can pay after the product is delivered. Sophisticated fraud-prevention analytics ensure that the system can’t be manipulated by those with devious intent.

Pushing out the little guy?

So, with all this reliance on technology and resource-heavy analytics, is this just another hurdle for the little guy in the face of competition from multinational retailing giants? Well, not necessarily. As is the case with Klarna, mentioned above, a growing number of middlemen are specializing in providing Big Data “as a service” infrastructure. This allows smaller businesses and independent operators to take advantage of many of the same data-driven approaches to sales and marketing, without implementing expensive hardware or hiring $100k-plus-per-year data scientists. Targeted advertising platforms of the type pushed by Google and Facebook offer businesses of all sizes the chance to benefit from Big Data-driven segmented marketing strategies. And a growing number of startups are offering social analytics to help anyone work out where their customers are waiting for them on social media.

Retailers, large and small, have been reaping the benefits of analyzing structured data for years, but are only just starting to get to grips with unstructured data. There is undoubtedly still a great deal of untapped potential in social media, customer feedback comments, video footage, recorded telephone conversations and locational GPS data. Great benefits will come to those who put it to best use, and in my opinion the best solutions will more likely come from innovative thinking and approaches to analytics than from simply collecting as much data as possible and then seeing what it does.

Cloud Applications – Out of Sight, Out of Mind?

CRM, Home Page, Strategy

Very interesting reminder….

I’m just completing a project where all the telephony for the client I am working for has moved to a cloud service provider – every part of the service from ACD, SIP Proxy, SBC and the associated reporting tools and user management all reside there.

It has been a very interesting project from both a technical and a planning perspective. One of the biggest lessons I have taken away is that even though a platform might sit in the cloud, it is very important to consider the service being provided and what you, as the client, need to have in place to make it an overall success.

Of course it’s possible to have Cloud Storage as a Service or Telephony as a Service but before you look at taking on such a project have you thought of:

  • demarcation points
  • incident management ownership and accountabilities
  • operational resilience
  • how much your provider really knows your business

I would advocate a move to the cloud for many applications, however just because an application resides there it does not mean that it is ‘out of sight, out of mind’.

ITIL IT Service Management in 5 Minutes

Home Page, ITIL, Strategy

Organizations all over the world, from NASA to Disney, utilize ITIL to help improve their IT processes. But what is ITIL Service Management? Here’s what you need to know, and how you can use ITIL to benefit your own IT organization.

What Does ITIL Stand For?

ITIL is an acronym that stands for “IT Infrastructure Library”. It was originally developed in the UK as a series of books explaining procedures and best practices for the IT industry to follow. The goal was to standardize the management of IT, so that every organization wasn’t doing its own thing but had a common set of IT standards to follow.

How Does it Work?

ITIL Service Management acts as a guideline for service delivery in the IT world. If you are committed to conducting best practices in the industry, ITIL is the way to go. As of today there are five different books, explained below.

ITIL Service Strategy

This portion of ITIL can be thought of as how an IT organization can best position itself for long-term success. Service Strategy discusses financial management and how to improve business relationships. It answers the question:

How can my IT department succeed over a long period of time?

ITIL Service Design

Designing IT systems should always involve a very important element: the user. Oftentimes, when planning or designing a system, no consideration is given to the specific intricacies of a business or its users. This section answers the question:

How can I plan my IT resources around my business?

ITIL Service Transition

When IT projects reach completion, they transition to becoming an actual service that people in an organization will use. For example, when a project to migrate to a new IT asset management system is complete, it is then “live” for users to begin working with. Service Transition works to answer the question:

How can I best transition an IT project over to a service for users?

ITIL Service Operation

Problems are a fact of life in IT. Without tech problems, most IT professionals would be out of a job! Service Operation is quite specific in helping provide a service level agreement framework for your IT service desk. It’s where you go to find the answer to:

How can my IT department meet SLAs?

ITIL Continual Service Improvement

No one wants to repeat mistakes. In IT, repeatable processes can be captured and used to improve efficiency and reduce bottom-line cost. Improvement is not always easy, however, and many IT departments need help answering:

How can my IT department continually improve its services?

Written by Samanage

Microsoft Dynamics, the real ERP alternative to SAP

CRM, Home Page, Projects, Strategy

Last Thursday, 19 November, Microsoft at last gave more details about the new Microsoft Dynamics AX, and since then we can start sharing more details about this revolutionary ERP platform: cloud based (with on-premise availability to follow from mid-2016), a modern HTML5 interface based on the Office 365 look and feel, Power BI, real-time analytics with in-memory database technology, Office 365 and CRM integration, machine learning and much more.

This new version represents a huge leap in technology while maintaining the proven functionality that has helped thousands of companies around the world optimise their processes and keep operating in a more challenging, more interconnected economic world.

The objective of this post is to focus on how Microsoft Dynamics compares to SAP, the other big player in the ERP market, which is still dominant for implementations in big companies, especially in its homeland, Germany, where it still holds about 50% market share (and about 20% worldwide). That means a lot of potential new customers for Microsoft Dynamics once they get the opportunity to see that it offers much more for their money.

Unlike other soft reports you may see out there, I can report from my own experience. I have worked in the Microsoft Dynamics world for more than 8 years and I am currently based in Germany, where I am in contact with many companies that use SAP; I have also had intensive experience with the new SAP version, S/4HANA with Fiori, over the last 6 months. I was even at SAP headquarters in Walldorf to take part in an official SAP HANA course 🙂

I want to focus my comparison on several points: the cloud, the development experience, the BI platform, the in-memory database, the functionality, the availability of professionals and training options, and finally the quality of the IT partners.

Cloud platform

With the arrival of Satya Nadella, Microsoft made the courageous bet on the cloud. At the time, the cloud was still considered hype, and only companies like Salesforce and Amazon were at the forefront of cloud innovation. This has all changed: since 2010 Microsoft has been innovating in the cloud, building Microsoft Azure with an extensive geographical presence that allows customers to keep their data near their operations. Even in countries like Germany, a Germany-only cloud is provided in collaboration with local partners like T-Systems.

Products like Office 365 and Microsoft Dynamics CRM are a huge success, and the pace of innovation is quite impressive. Yet Microsoft does not present an incoherent set of products, because all of them are easily accessible from the main Microsoft Azure platform. Even casual developers can start building web and mobile applications with technologies like machine learning and big data using a single account, with fairly modest fees after the free account expires. This is especially important because it builds a developer community that drives exponential innovation.

SAP’s cloud offering is based mainly on acquisitions of other companies such as Ariba, Concur and Fieldglass. These are great products, but they are by no means a homogeneous set, which confuses customers. SAP is also building its own data centres and delivering more and more of its business solutions in the cloud, which has eaten into revenue from its on-premise products, though this year its cloud business is almost double last year’s figures. As a developer you can also start experimenting a little with HANA in the cloud, developing web applications, but it still lags far behind the developer community Microsoft has built.

Development experience

From the point of view of someone who comes from the world of modern programming languages, the development experience with SAP is a horror story. With the new Microsoft Dynamics AX you develop your business code in X++, and some tasks in C#, all from a single development platform: Microsoft Visual Studio. Developing in Visual Studio is simply a pleasure because of the productivity of the tool, its options and its pleasant look and feel.

Developing business applications for the new SAP S/4HANA, by contrast, means learning ABAP, XSJS, Java, SQLScript, HTML5, CSS and JavaScript, and using several development IDEs, such as the ABAP Workbench, ABAP Development Tools for Eclipse and the SAP Web IDE. Just learning ABAP is a nightmare, and the language itself is obsolete, dating back to the first versions of SAP. In its early stages ABAP was only a language for reports, and for the sake of backward compatibility it evolved into a procedural language and then, artificially, into an object-oriented one. Its original name was already “Allgemeiner Berichts-Aufbereitungs-Prozessor”, which means something like “general report preparation processor”. There is a persistent rumour that SAP would like to kill ABAP, but it is still there in the latest version, S/4HANA, because almost all of the business logic is built in ABAP. That forces developers to learn how to work with OData in order to build interfaces between the logic, the new database and the new HTML5-based interface. Obviously, developers have little time left to think about writing business code when they have to spend so much time on so many languages and interfaces.

As a Microsoft Dynamics AX developer, you have immediate access to all the information you need to be productive. All the database tables and business classes are well documented. Just search Google for CustTrans and you will find the MSDN page on the Microsoft site where the table is described. Try to do the same with SAP, to find out which table contains the customer transactions, and you may spend quite a lot of time just trying to find basic information.

Finally, for developing user interfaces in the latest version of Microsoft Dynamics, you only need to know one technology: HTML5. Everything can be done with drag and drop plus code in the events and methods of the form object in Visual Studio. There is no need to be a web geek in order to develop business applications in the cloud.

With SAP, meanwhile, you still have to learn to develop with the old Dynpro, perhaps Web Dynpro, perhaps SAP Personas, and finally Fiori. Getting Fiori up and running is no job for novices either, and requires far-from-trivial configuration. And developing for Fiori is not the easy task SAP tries to sell you: you have to go much deeper into the HTML details and also build the OData interface between the user interface and the ABAP business code. That is assuming the business code is in ABAP, because if it is in HANA, hardly anyone, not even most SAP professionals, knows where on earth the business code would live.

BI platform

Microsoft’s BI offering for Microsoft Dynamics consists of the cool new Microsoft Power BI and the new services in SQL Server 2016. That’s it! Just take a look at the new Power BI and you will be impressed. Behind the scenes it will be powered by an in-memory database that delivers results at lightning speed.

With SAP, on the other hand, you will find a myriad of products and offerings that make it hard to understand which one you should use. Some are more appropriate for managers, others for department leaders, and still others for the shop floor. So you will have to choose between SAP Lumira, SAP BW, SAP Crystal Reports, SAP BusinessObjects Web Intelligence, SAP BusinessObjects Explorer, SAP BusinessObjects Dashboards and more.

That is not only far from easy, it is also far from cheap!

On Memory database

In the field of in-memory databases, SAP may have the advantage with its HANA database. Now Microsoft, with SQL Server 2016, seems to be serious about in-memory technology, and this is the main reason Microsoft Dynamics AX will be available only in the cloud until the middle of next year: Microsoft has to wait until the in-memory technology available on Azure is also available for on-premise systems. I would recommend that big companies with millions of transactions per day ask for a realistic performance test of both platforms before deciding. If HANA copes better with such a huge amount of data, it could still make sense to spend so much money on a platform as complicated as SAP. Otherwise, if you don’t process millions of transactions per day, you are trying to kill a flea with a sledgehammer.

Functionality

The new Microsoft Dynamics AX does not come with many functional changes, which matters because the functionality inherited from Microsoft Dynamics AX 2012 R3 is already proven and does not need many changes. In that respect it is a little like SAP, which maintains its functionality in modules like MM, SD, FI, CO and MCM and develops industry solutions on top of them. Almost all the functionality provided by SAP is included in Microsoft Dynamics. I can also say it in German, if that sounds more professional to you: Lieferbeleg erstellen, Materialien kommissionieren, Warenausgang buchen (create a delivery note, pick materials, post a goods issue)… everything is available in Microsoft Dynamics AX to implement your processes in the most optimal way.

One curious thing about SAP implementations is that your company is expected to work the way SAP dictates. Even in the analysis phase of a SAP project, users are simply shown some SAP PowerPoints about the functionality and asked to present their gaps. That can be fine for companies that have no clear idea of how they should work or optimise their processes, but if you have a clear view of your company and want to stay ahead of the competition, you will need much more. An example of such a company is Inditex in Spain (the Zara stores): they have built their own system, using no standard software, because they are a step ahead of the rest, and adapting to processes designed by others would slow them down. But that is a radical option that not everybody can take. Microsoft Dynamics AX offers a flexible solution that you can adapt and that will grow with you. Obviously, making changes to the system without understanding the standard functionality is not a good idea either.

Availability of professionals and training options

It is reported that SAP consultants in Germany can make almost €100,000, and developers even reach the €80,000 mark. That is only a hint of how expensive your project will be if you choose SAP. Becoming a SAP professional almost requires taking the expensive courses at SAP, because the system and its myriad options are quite inaccessible to the casual learner. Even if you work at an end user, it is not easy to learn the system without some SAP support. There are initiatives from SAP such as openSAP, but most of the courses offer little more than marketing for new technologies.

To become a Microsoft Dynamics AX developer, you don’t need many prerequisites: being a .NET developer with some SQL skills is enough to climb the ladder. Once you have the chance to play with a system at an end user or a Microsoft partner, you can learn quite fast if you have the right colleagues around you. Microsoft also provides plenty of training documents on CustomerSource and PartnerSource, and if you wish you can attend official Microsoft courses, which will speed up your skills on the products.

So I don’t see any scarcity of Microsoft Dynamics professionals; even SAP consultants could become Microsoft Dynamics consultants quite easily, bringing great ideas to their colleagues along the way. In Germany I have seen a rather incompetent recruiting process, where companies keep positions open for months or even years because they look for someone with Microsoft Dynamics experience but also fluent German, when in most cases that is absolutely not needed.

IT Partners

Microsoft Dynamics AX has been on the market for more than a decade, and there are plenty of Microsoft partners you can trust, some of them specialised in specific sectors such as retail or industry. Companies like my current employer AlfaPeople, MODUS Consult, COSMO Consult, Avanade, HSO, Impuls and SPH are good examples, and in Spain Prodware, AxAzure, Iniker IFR and Quonext are examples of highly productive Spanish Microsoft partners. Worldwide I can mention companies like Sunrise Technologies, HSO and K3, as well as the AlfaPeople network 🙂

There are also a lot of SAP partners out there, especially here in Germany, some of them housed in offices that look like palaces, which shows how much money has been made with SAP over the last decades. Now it is time for them to prove they are worth their money, and for all current SAP users to take a look at Microsoft Dynamics AX. They will be surprised to see how much money they can save and, more importantly, how much more agile they can become, reacting more quickly to the needs of their customers in a more interconnected, cloud-oriented world.

Pedro Rodriguez Parra
Dynamics Ax Developer at AlfaPeople GmbH

Data insight becomes a key competitive weapon in 2016

Home Page, Uncategorized

Less than 30 percent of enterprise architects connect analytics to business outcomes well. Look for more firms to double down on insights in 2016.

Three of four enterprise architects strive to make their firms data driven. But well-meaning technology managers only deal with part of the problem: how to use technology to glean deeper, faster insight from more data, more cheaply. Consider that only 29% of architects say their firms are good at connecting analytics results to business outcomes. This is a huge gap! The problem is a “data driven” mentality that never fights its way out of technology to what firms care about: outcomes.

In 2016, customer-obsessed leaders will leapfrog their competition, and we will see a shift as firms seek to grow revenue and transform customer experiences. Insight will become a key competitive weapon, as firms move beyond big data and solve problems with data driven thinking.

Shift #1 — Data and analytics energy will continue to drive incremental improvement

In 2016, the energy around data-driven investments will continue to elevate the importance of data and create incremental improvement in business performance. In 2016, Forrester predicts:

  • Chief data officers will gain power, prestige and presence…for now. But the long term viability of the role is unclear. Certain types of businesses, like digital natives, won’t benefit from appointing a CDO.
  • Machine learning will reduce the insight killer — time. Machine learning will replace manual data wrangling and data governance dirty work. The freeing up of time will accelerate data strategies.
  • Firms will try to sell their data; some will succeed, most will sputter. In 2016, an increasing number of firms will look to drive value and revenue from their data exhaust. Despite the promise, most companies will struggle to master the intricacies of protecting personal information and the appropriate business models.

Shift #2 — Data science and real-time analytics will collapse the insights time-to-market.

The trending of data science and real-time data capture and analytics will continue to close the gaps between data, insight and action. In 2016, Forrester predicts:

  • A third of firms will pursue data science through outsourcing and technology. Firms will turn to insights services, algorithm markets, self-service advanced analytics tools, and cognitive computing capabilities to help fill data science gaps.
  • Streaming ingestion and analytics will become a must-have for digital winners. The window for turning data into action is narrowing. The next 12 months will be about distributed streaming alternatives built on open-source projects like Kafka and Spark.
  • Algorithm markets will start to get attention. Firms will recognize that many algorithms can be acquired rather than developed. Just add data. For example, services like Algorithmia, Algo Market, DataXu, PrecisionHawk, Algorithms.org, Algorithms.io and Kaggle, and galleries from AzureML and BigML, will gain traction.
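The streaming idea, aggregating events as they arrive inside a sliding time window instead of batch-processing them later, can be sketched in a few lines. Real deployments would build this on Kafka or Spark; here an in-process deque stands in for the stream so the example stays self-contained.

```python
# Toy sliding-window streaming aggregation: count events per key over the
# last `window` seconds, evicting expired events as new queries arrive.
from collections import deque

class SlidingWindowCounter:
    """Count events per key within the last `window` seconds."""
    def __init__(self, window=60):
        self.window = window
        self.events = deque()  # (timestamp, key) pairs, oldest first

    def add(self, ts, key):
        self.events.append((ts, key))

    def counts(self, now):
        # Evict events that fell out of the window, then tally the rest.
        while self.events and self.events[0][0] <= now - self.window:
            self.events.popleft()
        tally = {}
        for _, key in self.events:
            tally[key] = tally.get(key, 0) + 1
        return tally

w = SlidingWindowCounter(window=60)
for ts, key in [(0, "click"), (10, "buy"), (30, "click"), (65, "click")]:
    w.add(ts, key)
print(w.counts(now=70))  # the ts=0 and ts=10 events have expired
```

The narrowing "window for turning data into action" is literal here: only events inside the window influence the answer, which is what makes the insight fresh.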

Shift #3 — Connecting insight to action will only be a little less difficult.

Closing the gap between insight and action is the big unfilled hole we found in 2015. In 2016, that gap will be hard to close for all but the most advanced leaders. However, by the end of 2016, energy around big data will be substantially redirected towards insights execution. In 2016, Forrester predicts:

  • Half of all IT-led big data hub investments will stagnate or be redirected. Business satisfaction with analytics output fell by 20% between 2014 and 2015. Next year, impatient business leaders will shut down or redirect big data investments that fail to deliver a measurable impact on winning, serving, and retaining customers.
  • Only a few elite teams will take the leap from BI to Systems of Insight. Only a few teams are taking baby steps toward agile BI, and Forrester expects that less than a third of these will be ready to take the next leap — Systems Of Insight.
  • Data brokers and insights innovators will collide in the insights services market. Technology vendors, data brokers, and marketing data management platforms (DMPs) all recognize the opportunity to sell insights, not data, as a service. But, they are rushing to meet the demand with entirely different strategies. Expect chaos in 2016.


CRM activity Records

CRM, Home Page

Microsoft Dynamics CRM has different Activity entities, capturing actions by CRM users. ActivityPointer records are automatically created when an Activity record is created, enabling developers to retrieve different Activity types with one request.

Activities are the actions users do in CRM, often tracking the interaction between users and customers.

CRM has lots of different activities

  • PhoneCall
  • Task
  • Letter
  • Email
  • Appointment
  • Fax
  • Custom Activities

Tracking the activity of users/customers records the time spent on CRM records. For cases, you can see the activities associated with the case. Tracking activities allows different people to work a record with full knowledge of the status of the case.

Activity facts

  1. Activity is an entity in CRM; you can view it in a custom solution.
  2. The Activity entity is non-editable.
  3. The Activity entity’s schema name is ActivityPointer (i.e. ActivityPointer = Activity).
  4. ActivityPointer is not the same as Activity Party.

Microsoft ActivityPointer description

“Task performed, or to be performed, by a user. An activity is any action for which an entry can be made on a calendar.”

What’s the purpose of the Activity Entity?

If you can’t explain it simply, you don’t understand it well enough.

Albert Einstein

To understand a piece of CRM functionality, you need to understand its purpose. Once you understand what it’s used for, how it works and why it works that way become much clearer.

The purpose of the Activity entity is in the schema name – ActivityPointer.  Developers can retrieve all activities with one request, instead of multiple retrieves using the different activity types.

The activity views allow you to show all activities for a record, even though the activities are of different types (e.g. Email, PhoneCall, Task).
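As a rough illustration of that single-request pattern, here is a hedged Python sketch. The OData endpoint path and field names follow the standard Dynamics CRM OrganizationData.svc schema, but treat them as assumptions rather than a tested integration:

```python
# Illustrative only: endpoint path and field names are assumptions based on
# the standard Dynamics CRM OData schema, not a verified integration.

def build_activity_query(org_url, regarding_id):
    """Build one OData query that fetches every activity type for a record
    through ActivityPointer, instead of one query per activity type."""
    select = "ActivityId,ActivityTypeCode,Subject"
    filt = f"RegardingObjectId/Id eq guid'{regarding_id}'"
    return (f"{org_url}/XRMServices/2011/OrganizationData.svc/"
            f"ActivityPointerSet?$select={select}&$filter={filt}")

def group_by_type(activities):
    """Group retrieved ActivityPointer rows by their concrete activity type."""
    grouped = {}
    for a in activities:
        grouped.setdefault(a["ActivityTypeCode"], []).append(a["Subject"])
    return grouped
```

The point is that Emails, PhoneCalls and Tasks all come back from the same ActivityPointer request and can then be grouped locally by type.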

The CRM SDK has a great page on Activity entities.

The diagram shows why understanding Activities is important: Activities are linked to the primary entities in CRM. The Activity entity acts like an interface between primary entities and the individual activities.

The ActivityIds are the same because CRM creates an Activity record whenever an individual activity record (Task, PhoneCall, Email, etc.) is created.

Activity Types

Doing an Advanced Find, you can use the Activity Type field on the ActivityPointer entity to see which activity types can be ActivityPointer records. The Activity Type field is a global option set.

activity type
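A hypothetical sketch of the sort of FetchXML such an Advanced Find produces: it queries activitypointer and filters on activitytypecode (the global option set). The attribute names follow the standard CRM schema but are assumptions here:

```python
# Hypothetical sketch, not an exact reproduction of Advanced Find output.
# Entity and attribute names (activitypointer, activitytypecode) follow
# the standard CRM schema.

def activity_type_fetchxml(type_codes):
    """Return FetchXML retrieving ActivityPointer records whose
    activitytypecode is one of the given values (e.g. ['email', 'task'])."""
    values = "".join(f"<value>{c}</value>" for c in type_codes)
    return (
        '<fetch>'
        '<entity name="activitypointer">'
        '<attribute name="activityid"/>'
        '<attribute name="subject"/>'
        '<filter>'
        f'<condition attribute="activitytypecode" operator="in">{values}</condition>'
        '</filter>'
        '</entity>'
        '</fetch>'
    )
```

Because the filter runs against ActivityPointer, one query can span any mix of activity types rather than hitting each type’s entity separately.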

 

from the CRMHosk Blog


Microsoft to open UK datacentre

CRM, Home Page, O365, SharePoint, Strategy

The new UK-based datacentre is said to be opening from late 2016, though it sounds like the MoD will begin using it sooner than that.

O365 and Azure UK data centre coming 2016

In his keynote speech at the Future Decoded event in London yesterday, CEO Satya Nadella stated that customers in the UK would at last be able to store data within the country, allaying fears (even if not actual legal impediments) around governance and data protection.

In addition to Microsoft Azure and Office 365, the UK datacentre will support Microsoft Dynamics CRM Online sometime afterwards. Microsoft will also offer Azure ExpressRoute to provide customers with the option of a private connection to the cloud.

“At Microsoft, our mission is to empower every person and organisation on the planet to achieve more,” says Nadella. “By expanding our datacentre regions in the UK, Netherlands and Ireland we aim to give local businesses and organisations of all sizes the transformative technology they need to seize new global growth.”

He added that the new local Microsoft cloud regions will enable data residency for customers in the UK, allowing data to be replicated within the UK for backup and recovery, reduced network distance and lower latency.

Nov 11, 2015


How to Make Microsoft Office Desktop Software Work Seamlessly With Google Drive

Google Apps for work, Home Page

drive-devices


Last December, we shared how Google was making its productivity suite, Google Apps for Work, work “friendlier” with Microsoft Office files. Earlier this week, Google announced what seems a surprising next step in its Microsoft Office embrace: a new plugin for Google Chrome (Windows version only) that enables Google Drive to work seamlessly with Microsoft Office software. Using the new Google Drive Chrome plug-in, people using Office for Windows can open their Word, Excel and PowerPoint documents stored in Drive, then save any changes back to Drive once they are done. The move supports Google’s focus on making Google Drive more competitive with cloud storage leaders like Dropbox, even at the price of weakening its head-to-head competition with Microsoft Office.


How it works

If you’re working on a Word document, Excel spreadsheet or PowerPoint presentation that’s on your computer, you can also save that file to Google Drive, directly from the Office apps. This is especially useful for sharing files with teams, or for accessing your files across devices running Windows.

Yet another Cloud option

Box and Dropbox have taken similar approaches to accommodate Microsoft’s commanding lead in office productivity software. Box lets you create and edit Office Online files, while Dropbox has brought collaboration features right into Microsoft Office for Windows and Mac. And, of course, there is Microsoft’s own OneDrive.


Data Loader Service: How to Use (Part 2 of 2)

CRM, Home Page


This blog post is the second in a two-part post about the Data Loader Service.  The first post can be found here.  

This post details a quick walk-through of how to use the service and various features.

Configuring Data Loader Service

There are two steps that need to be completed before this service can be used.

Step 1: Deploy the Data Loader runtime for a specific CRM organization

For every CRM organization, Data Loader needs to deploy a new runtime module, both to ensure data isolation across organizations and to stay close to the data center of the specified CRM organization.

1. Click on “Deploy runtimes” tile

2.  Click on the “+” button and fill in the data in the pop-up screen. Select the CRM organization that you would like to import data to and click the “Deploy” button on the page.


3.  The deployment will take approximately 15-30 minutes.

4.  Your deployment is ready when the grid shows a “Running” status, as below.

Step 2. Configure the flat file format.

1.  On the main dashboard click on “Configure file format” tile.

2.  In the “Configure file format” page, click on “+” and enter the necessary information pertaining to the flat file format. Here is an example of the standard CSV format.

3.  Click save.

Now you are ready to start importing data into your CRM organization.

Importing data

1.  Click on the “Import” tile on the dashboard

2.  This will start a wizard; follow the steps. Here you can upload data for multiple entities as needed.

3.  On the “3. Map fields” step, the service will do best-effort matching of the source file columns to the target entity. For any unmapped fields, you can choose to complete the mapping or ignore them.

Another key point: the drop-down for target fields also displays any alternate keys defined on the lookup entities, so you can map the alternate keys for your lookup columns.

4.  At the end of the wizard, give a name to the project and click on “Start data job”. This will start processing the uploaded files and import them to the cloud staging environment.

This does not start the “Import to CRM” step yet.
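The best-effort matching in the “Map fields” step can be imagined as simple name-similarity mapping. This Python sketch is purely illustrative (it is not the actual Data Loader implementation, and the field names are made up), but it shows the idea of auto-mapping close names and leaving poor matches for the user:

```python
# Illustrative sketch of best-effort column mapping; NOT the actual
# Data Loader algorithm. Field names here are hypothetical.
import difflib

def best_effort_map(source_columns, target_fields, cutoff=0.6):
    """Map each source CSV column to the closest-named target entity field,
    leaving poor matches unmapped for the user to complete or ignore."""
    mapping = {}
    for col in source_columns:
        hit = difflib.get_close_matches(col.lower().replace(" ", ""),
                                        target_fields, n=1, cutoff=cutoff)
        mapping[col] = hit[0] if hit else None
    return mapping
```

For example, a source column "First Name" would auto-map to a target field named "firstname", while a cryptic column like "Acct No" would stay unmapped for the user to resolve by hand.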

Viewing the Job execution details

1.  Click on the specific job’s card view on the dashboard.

2.  This will open the Job Details page, which is divided into 3 tabs:

  • Source tab.  This displays the successes or errors encountered while processing the files into the staging table. If there are any records the service was not able to import into staging, it displays them as errors with details.
  • Staging tab.  This is one of the most important parts of the service. It reflects the status of every record: successfully imported into cloud staging, imported into CRM, errored, and so on. In this view the user can run data quality validations and fix errors.
  • “Import to CRM” tab.  This tab displays the progress status after the import to CRM has started.

Run data quality services

1.  On the Jobs details page, click on “Staging” tab

2.  To run DQS, click on “Validate” as shown below. This runs two validations: metadata validation and lookup validation. Depending on the number of records this might take a few minutes, and you can start validation on all the entities in parallel if needed.

3.  At the end of validation, certain records will change to “Not Valid” status – see the picture below.

Fix errors in Excel

1.  You can fix the errors in Excel by clicking the “Download to excel” button, found in the Staging tab as shown below.

2.  This will open Excel. Click “Sign in” and enter your CRM credentials. Once authenticated, Excel will load all the errored records.

3.  Review the error messages and fix the records. After the records are fixed, click on “Publish”. This publishes the fixed records back to staging. You will need to refresh the data in the staging grid to see the changes.

Start Import to CRM

After the data has been validated and fixed in staging, you are ready to start the import to CRM. This starts the import for all the entities in the current data job.

1.  Click on “Import to CRM” – as seen below.

2.  Once the import to CRM has started, the progress of the import can be viewed in “Import to CRM” tab

Any records that did not make it into CRM will remain in the staging grid in “Errored” status, with the detailed error message received from the CRM web service. You can review the error messages, fix the records again in Excel, and retry the import. This service enables iterative import until all the required records are imported.

Refresh CRM Metadata in Data Loader

If there are metadata changes or customizations added on the CRM side after the Data Loader runtime is deployed, follow the steps below to refresh the metadata in Data Loader.

1.  Click on “Deployed runtimes” on the main dashboard

2.  Select the CRM organization in the grid and click on the refresh icon over the grid.

This will start refreshing the metadata. It might take several minutes before the refreshed metadata is reflected.

Cheers,

CRM Product Team


Data Loader Service: Preview Feature for Microsoft Dynamics CRM Online (Part 1 of 2)

CRM, Home Page, Uncategorized


Customers face challenges with data migration scenarios, where each customer or partner has to write custom code or use third-party tools to import large volumes of data. To help alleviate this pain point, the Microsoft Dynamics team has developed a cloud Data Loader service for Dynamics CRM Online. The key benefit of this service is the ability to move your data between flat files and CRM Online while cutting down on implementation costs. We’re pleased to announce a preview of this service that will empower organizations to better manage their data import/export processes. The preview supports only the import operation; export will come in the next update of the feature.

This new Data Loader is available as a preview to North America-based Dynamics CRM Online organizations. It supports CRM 2015 Online Update 1 and the upcoming release.

Important

Microsoft doesn’t provide support for this preview feature. A preview feature is a feature that is not complete, but is made available before it’s officially released so customers can get early access and provide feedback. Microsoft Dynamics CRM Technical Support won’t be able to help you with issues or questions. Preview features aren’t meant for production use and are subject to separate supplemental terms of use for preview features.

In this post, I will cover the following high-level topics for the Data Loader:

Key Benefits

Getting Started

Preview features

Known Limitations

Send us feedback

Key Benefits

  • Quick and easy to configure import of data
  • Eliminate writing custom code against CRM SDK for importing data and thus cutting down on the implementation time and cost
  • Supports bulk loading of data
  • It’s available at no cost

Getting started

The Data Loader preview is enabled by default for all CRM administrators. Use the following steps to access the service:

  1. Navigate to https://lcs.dynamics.com/DataLoader/Index.
  2. Click on Sign in and enter your CRM administrator credentials.

NOTE: This CRM admin also needs to be a Service or Global admin in AAD (Azure active directory).

This will log you into the service, and you are ready to go.

For a detailed walk-through of how to use this service, refer to the blog post Data Loader Service: How to Use (Part 2 of 2).

Preview features

  • All data uploaded is encrypted
  • Support for updates and creates
  • Support for flat files with any delimiter
  • Edit and re-use data mappings
  • Excel app for fixing invalid data in the staging db and iterate over the data
  • Parallel processing to support bulk loads
  • Import of multiple entities in one data project
  • Handles auto detection of insert order and relationships
  • Imports historical data like closed activities, older Created date
  • Achieve high throughput

Preview limitations

  • Only supported for North America geography
  • No support for email attachments
  • No Web API support for accessing the tool
  • Limited to flat files as the source data format
  • Scheduling Jobs not supported
  • Dynamics CRM On-Premises is not supported

Send us your feedback

We are making this preview available so that you can try it and let us know what you think. Your feedback will help us prioritize work to include the capabilities you need most. We ask that you send us your questions and suggestions, and report problems, from right inside the Data Loader user experience, using the Smiley feature shown below.

Cheers,

CRM Product Team


Why companies are switching from Google Apps to Office 365

Google Apps for work, Home Page, O365

Microsoft’s increasingly strong Office 365 performance is coming partly at the expense of Google Apps. Motorola’s recent decision to move from an elderly version of Office to Google’s cloud service bucks the more common trend of companies who have been using Google Apps switching to Office 365.

It’s not just Microsoft saying that Office 365 is growing (COO Kevin Turner claims that four out of five Fortune 500 companies use the service). Last year, cloud security company Bitglass said traffic analysis gave Google twice the market share of Office 365 among its customers, with 16.3 percent of the market; that went up to 22.8 percent this year as more companies switched to cloud services. However, over the same year, Office 365 grew far faster, from 7.7 percent to 25.2 percent. Google has a slight advantage with small businesses (22.8 percent to Microsoft’s 21.4 percent) but in large, regulated businesses (over 1,000 employees), Microsoft’s 30 percent share is twice that of Google and growing fast.

Office 365 is even more popular with the 21 million customers of Skyhigh Networks’ cloud security services, where 87.3 percent are using Office 365 services, with each organization uploading an average of 1.37 terabytes of data to the service each month.



Okta

There are some geographical differences in the popularity of Office 365 and Google Apps in Okta’s customer base, with APAC currently a Google stronghold.

That fits what identity management company Okta is seeing. Office 365 is the most commonly deployed application among its customers (beating even Salesforce) and adoption is growing faster than any other cloud applications. It’s also the cloud service customers use the most, probably because that usage includes all the email users send and receive.

Okta CEO Todd McKinnon does note that the picture is a little different in different parts of the world and across different industries. Google Apps is stronger in APAC, although that may change as Microsoft builds out new data centers in the region (that’s already making a difference in Australia and Japan). The only industry segments where Google Apps has more share than Office 365 are technology, media, Internet, and software companies. The smaller the company, the more share Google Apps has among Okta’s customers; but even in the smallest companies Office 365 is still in the lead.

“There are different dynamics that matter based on the company size,” McKinnon points out. “Large companies need manageability, security, reliability. You wouldn’t see this acceleration of Office 365 in large companies without Microsoft doing a lot of work [in those areas].”



Okta

Google Apps is more popular with smaller businesses in Okta’s figures.

[Related: How Office 365 balances IT control with user satisfaction]

The majority of new Office 365 customers are moving from on-premises, but even companies that have already adopted Google Apps for Business are switching to Office. Microsoft claimed they won back 440 customers in 2013, including big names like Burger King and Campbell’s, and the trend is continuing. Some of that may be the halo effect of the Office 365 growth making companies that picked Google Apps question whether they made the right decision. But often, it’s because of dissatisfaction with Google Apps itself.

The simplicity of Gmail and Google Docs clearly appeals to some users, but as one of the most widely used applications in the world, the Office software is familiar to many. “When you put these products into companies, the user interface really matters,” McKinnon says. “For email, the user interface really matters. Google Apps is dramatically different from Office and that’s pretty jarring for people who’ve been using Outlook for a long time. It’s like it beamed in from outer space; you have to use a browser, the way it does conversations and threading with labels versus folders, it’s pretty jarring.”



Okta

Cloud security identity and security services find that Office 365 is gaining popularity with their customers; this shows the growth in Office 365 adoption among Okta’s users.

And it’s hard to use Outlook with Google, many customers report. “Some companies, they go to Google and they think they are going to make it work with Outlook; what they find out when they start using the calendar is that it just doesn’t work as well with the Google Apps backend as it does when you’re using Office 365. The user interface is so important that it pulls them back in. Even if you like the Google backend better, you have thousands of users saying ‘what happened to my folders?’”

Buying Office 365 for Office

That’s what Glenn Jimerson, currently CTO of fintech startup Loanatik, found with an earlier startup. “I’ve deployed Google Apps in three different startups and I personally like it for many reasons, including the price; it’s great bang for the buck.” But while young founders and employees, especially Mac users, were happy with Google Apps for the basic document tasks they were doing, other, older workers found they weren’t as productive without Office. “I got a lot of backlash; they weren’t happy that it wasn’t Outlook. They were saying ‘I really want PowerPoint to do my presentations.’”

The tipping point was a new CEO who insisted on working in Outlook. When Jimerson looked at the options, Office 365 made more financial sense than just buying the Office software. “We would pay Google Apps $5 a month and then we’d have to buy the Office suite for each computer. If you’re pushing somebody who’s used to an Office environment into a Google cloud, they’re going to feel this vacuum because they no longer have the programs they’re familiar with. It represents a huge investment in time that people aren’t going to be receptive to. And you have Microsoft saying ‘for just $3 a month more you could have all these great programs you’re used to.’ Now they’ve got the pricing so you get more than you get on Google, what Microsoft is offering is fantastic, and for $3 more it’s a premium worth paying. Microsoft is still the king of the hill for a reason.”

The cloud aspect of Google Apps hadn’t proved important to the startup (and it wasn’t why they switched to Office 365). “Everybody was fine with the idea of the cloud but it wasn’t the primary reason; the cloud was nice to have but they didn’t necessarily see it as a productivity boost.” In fact, more employees were concerned about working offline. “What happens if there’s no Internet, if I’m in a plane with no Wi-Fi, can I still work? Their first reaction is ‘I want Office for that’.”

His current company has used Office 365 from the start (“I brought up Google Apps but nobody was willing to be that cheap about $3 a user,” he notes) and OneDrive is one of the most popular features. “People like it; it’s taken over from sneakernet and emailing back and forth. If they need to work together, people just toss it up on OneDrive.”

Outlook and Excel features come up again and again as advantages for the companies who have made the move away from Google Apps. Erik Jewett of Skykick, which provides a service partners use to migrate customers to Office 365, hears that particularly from power users. “In Excel, there are rich capabilities that aren’t matched by Google Apps.” In Outlook, calendar sharing is important, as is delegation. “Administrative assistants can manage their manager’s calendar; they don’t have that type of delegation with Google Apps.”

Nick Espinosa, the CIO at IT consultancy BSSSi2, has helped several businesses move from Google Apps to Office 365. “Quite frankly, Google is completely outclassed by Office 365 in this arena, and despite the price difference, corporations that made the switch to Google Apps to save money usually end up coming back within a year. The primary driver of this appears to be Outlook integration over everything else, followed by the inability to do some advanced things that Microsoft Office excels at.”

[Related: Google for Work vs. Microsoft Office 365: A comparison of cloud tools]

For larger companies, this goes beyond the familiarity of Outlook into advanced features. “You can integrate Skype into Outlook, you can integrate OneDrive for Business into Outlook. It becomes essentially like a command center, and there is nothing Google gives you that does that.”

“The reason people have been moving to Google is cost,” Espinosa says. “Most companies we’ve seen that have decided to move to Google, it was primarily for cost savings. They say ‘we get email, we have all these things and it’s significantly less expensive than having to buy a copy of Office for everyone and hook up a mail server.’ But a lot of people don’t find the usability and collaboration nearly as effective as Office 365.”

Enterprise advantage

Not all companies who switch to Office 365 are using it as a cheap licensing deal for the Office applications. They also value Microsoft’s enterprise know-how.

“As a CIO, the goal is to run a balance between keeping all the employees happy and keeping the IT staff from pulling out their hair trying to centrally administer everything,” Espinosa says. “Most IT staff are very familiar with Microsoft infrastructure already. The Office 365 platform is essentially built on Active Directory (AD) and that’s integrated into most networks. Anyone that has had an Exchange server knows how to create routing, groups, calendars, collaboration…”

For many customers, Office 365 also copes better with the scale and complexity of a multinational enterprise than Google Apps. The global scale of Office 365 is an advantage to customers in government, education and regulated businesses who care about where their data is and who can access it. Dr Mary Davis, the CIO of Macquarie University in Australia, explains that their recent switch from Google Apps to Office 365 came “following a decision made by Google to move our stored data from Europe to the United States.” Microsoft’s data centers in Victoria and New South Wales fit their security and privacy concerns better, Davis says, and they’re getting faster access because the services are closer to them. She also notes that the majority of other Australian universities use Office 365 or Exchange and “many plan to ultimately move to Office 365,” which makes collaboration easier.

Google Apps didn’t cope well with scale at one large business Espinosa helped to migrate to Office 365, where they had been using Google Hangouts for online meetings. “Someone created a hangout for their meeting and they were hosting the meeting, and then another person tried to create a hangout with the same name – and they ended up being merged into the meeting. That doesn’t happen in Skype for Business.”

In that case, the mix-up was only confusing, but if confidential information was being discussed, it could have caused serious problems. “You should be able to create containers that are properly structured and secured,” says Espinosa, putting the difference down to Microsoft’s years of experience with enterprise systems. “There’s just a lot of detail in Office 365 that Google is just learning.”

Okta’s McKinnon says that goes beyond features to the whole way Google deals with businesses. “When they built Google Apps it was for consumers; the email had advertising in it. To be successful in enterprise takes a very different culture. You have to market it differently, you have to have a sales distribution organization, a support organization, different legal contracts for customers that you’re able to customize. It’s not that Google’s not capable of doing that, but it’s a different culture.”

Google’s approach to support can be frustrating, agrees Jewett. “Microsoft has been able to provide higher level of support, certainly for enterprise customers who are able to pay for dedicated customer account managers, and we hear that as a top reason to switch from customers.”

“The cut-off is that if you’re under 1,500 users they won’t talk to you,” Espinosa complains. “Google should have a paid support line. We can get Microsoft 24 hours a day; in an emergency, they will get back to us in an hour. In an emergency, they’re there with us from midnight to 3 a.m., if we need them.”

The Google dead end

Reaching partners like Espinosa, whom many businesses turn to for IT help, is critical, especially for small and medium businesses. “That’s an area where Google has been cutting back on partners,” says Jewett. “I definitely hear partners saying they used to sell Google, and Microsoft has done a very effective job of flipping them from being large Google resellers to large Microsoft resellers.”

The success of Office 365 is even attracting partners who have previously specialized in Google Apps. Maarten van Dijk, owner of Dutch consultancy Digitalent, moved his company from Google Apps to Office 365 this summer, partly because of the number of consulting requests and job opportunities they were getting from customers that involved Office 365. But as an early adopter of Google Apps – van Dijk had been using the service for ten years – he was also disappointed with the lack of new features. “It just didn’t improve much in the last few years; I felt their development was at a dead end.”

The 1TB of storage in Office 365 was appealing. The storage in Google Apps was much smaller and the company found buying more was unnecessarily complicated. And the migration has made van Dijk interested in other Microsoft cloud services that work with Office 365; he’s also considering moving their on premise virtual machines to Azure and investigating syncing their Active Directory with Azure AD.

Espinosa sees that hybrid option as a definite advantage for Microsoft. “You can add Office 365 into your local solution. You can have AD, security, everything on premise and move elements like email to Office 365.” Google offers some AD integration, he notes; “you can filter and block across a domain, you can even push Windows group policy to Chrome. But Microsoft absolutely has the advantage for running AD and replicating that into the cloud.”

Van Dijk isn’t the only customer switching away from Google Apps because of the lack of development. Google showed early promise but they didn’t invest while Microsoft improved and that’s disappointed the early adopters, suggests McKinnon. “When we started seven years ago, Google Apps was pretty nascent but it was pretty good. I would have predicted that Google would have run away with email and collaboration, but over the last two or three years, Microsoft has essentially caught up and passed Google Apps.”

Skykick’s Jewett hears the same thing from customers. “Google started off as the leader; they were the first to have completely web-based productivity tools. It was a very effective way for Google to get the perception that they were being more innovative. And many people made a strong bet on Google having a strong future plan.”

That spurred Microsoft to catch up, and Google hasn’t kept up, says Jewett. “Microsoft started from behind but they made the large investments [required]. It’s more than just vaporware; they have built out greater capabilities where Google has been standing still. Microsoft has gone from behind to being the leader. They have a roadmap of new features and products continuing to come out in productivity.”

“It was early adopters who moved to Google; when they made that decision Google was the clear leader and now they see Google hasn’t invested to build on the expectation that was set. Given the sophistication of Google as a company, we’ve found it surprising that they haven’t built out more enterprise capabilities around Google Apps – and customers are noticing.”

Jewett notes that even a year ago Skykick had frequent requests to provide a migration service to Google Apps; “we don’t really hear that any more.”

Email, file sharing and unified communications may be enough of a commodity to move to the cloud (rather than keeping in-house infrastructure and expertise), but businesses don’t see them as legacy systems that don’t need to improve. They’re looking for innovation in these areas, and they’re betting on Microsoft rather than Google to deliver that.

“What Microsoft has over its competitors is a comprehensive understanding of what matters to business,” says Espinosa. “Microsoft is much better positioned than Google to be the dominant force in providing cloud for business, and it has overtaken Google because businesses have realized they should never have switched from Microsoft in the first place.”

 

This article was written by Mary Branscombe from CIO and was legally licensed through the NewsCred publisher network.