Thursday 19 December 2013

Results Analysis - A Detailed Overview



The main point of KnowledgeSmart is to provide AEC firms with detailed skills gap and training needs analysis data.  So let's take a proper look at the process.

To begin, we will assume that you have already gone through the process of adding users, inviting users, assessing your teams and you now have a healthy selection of results in your KS dashboard.

Don't forget, you control the amount of feedback that users get to see at the end of their test session, by selecting your preferred option on the Settings > Test Feedback page.


And you can also choose to display (or hide) the 'Skip Question' (I don't know the answer) button, the 'Request Training' (I'm not sure about my answer) button and the test logout button, via the Settings > Test UI Options page. This will have an impact on the types of results data you are able to capture and analyse later on.



Last, you can choose to display (or hide) the user data capture pages, which appear at the start and end of a test session. We will be adding some new search values shortly, which allow you to create reports on user profiles and demographic data.



You can monitor the status of your invites (Not started, In Progress, or Completed) on the Invites > History page.


For the purposes of this exercise, we're using dummy results data, to protect the (incompetent?) innocent!

When your teams have completed their assessments, you can view your KS results in the Results > Data area of your dashboard.


Use the 'Results per page' drop-down to change the number of results displayed on screen.


Use the account drop-down menu to view results data across linked accounts.


Click on the score link to view a detailed summary of each test report. The report opens up in a new window.


You can display links to in-house or third party learning resources in the test report, which builds a bridge between the testing and training parts of your learning journey.



Creating Groups

You have a range of searching & grouping tools available for filtering and analysing your KS results in detail.

Step one
Select the 'Show search' link in the orange bar, at the top of the Results page.


There are a variety of different search parameters. You'll also see the five user datafields on the right-hand side of the search box (you can add your own custom labels on the Settings > User Datafields page). Use these values to add more depth to your results searching & filtering options.


Step two
Enter the information in the relevant search field and hit 'Search' to filter your results data. Use the check boxes to select your results records for the new group.


Selecting the 'Training requests' filter will display a list of all users who have selected the 'Request training' button during their test session.


Click on each report to see which questions have been flagged.


Step three
Select the 'Show groups' link in the orange bar.


Step four
Use the 'Create New Group' tool to create a new sub-group of your results, based on the output of your search.  Once you have selected your results, type in the name of the new group and hit 'Create'.



Step five
Use the 'View group' drop-down menu to navigate between sub-groups of your results data.



Dynamic Groups

Use the dynamic grouping tool to create new groups which will automatically update when new test results are added to your dashboard.

Step one
Select the 'Show dynamic groups' link in the orange bar.


Step two
Use the 'New Group' tool to create a new dynamic grouping of your results.  Enter the group name in the field provided.


Step three
Use the five datafield drop-downs to create the rules you require for adding existing and future results to your group, then save your new dynamic group by selecting 'Save Group'.
For example, a results group has been created based on datafield 1, City: London.  The next time a new result is added to the database and the associated user record includes datafield 1, City: London, the result will automatically be added to the existing group.
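For anyone who likes to think about this in code, here's a minimal sketch of that rule-matching idea in Python. The record structure and field names are purely illustrative assumptions, not the actual KS implementation.

```python
# Illustrative sketch of dynamic group rule matching (not the actual KS code).
# A dynamic group stores one rule per user datafield; a new result joins the
# group when its associated user record satisfies every rule that has been set.

def matches_group(user_record, group_rules):
    """Return True if a user record satisfies all of a group's datafield rules."""
    return all(
        user_record.get(field) == required_value
        for field, required_value in group_rules.items()
    )

# Example: a dynamic group keyed on datafield 1, City: London
london_group_rules = {"datafield_1": "London"}

new_result_user = {"datafield_1": "London", "datafield_2": "Architecture"}

if matches_group(new_result_user, london_group_rules):
    print("Result added to the 'City: London' dynamic group")
```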


Step four
Use the 'View dynamic group' drop-down menu to navigate between sub-groups of your results data.



Exporting Results

You have a variety of options for exporting your KS results data.

Select the 'Export results to csv' button to generate a spreadsheet of overall results data, including a personal curriculum of training topics for each user.



Select the 'Export training info to csv' button to generate a spreadsheet of training workshops, together with a corresponding list of users who have been identified as potentially needing to attend each workshop.



Select the 'Export training requests to csv' button to generate a spreadsheet of users who have flagged questions for further training. The report lists the highlighted questions and corresponding training tags.



Select the 'Export skipped questions to csv' button to generate a spreadsheet of users who hit the 'Skip question' button during their test session. The report lists the skipped questions and corresponding training tags.
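If you want to take the analysis a stage further in your own scripts, the exports are plain csv files. Here's a short sketch of the kind of post-processing you might do on the skipped questions export; the file name and column header below are hypothetical, so check them against your own export first.

```python
# Sketch: count how often each training tag appears in an exported csv.
# The file name and the "Training Tag" column header are assumptions; adjust
# them to match the headers in your own KS export.
import csv
from collections import Counter

tag_counts = Counter()
with open("skipped_questions.csv", newline="") as f:
    for row in csv.DictReader(f):
        tag_counts[row["Training Tag"]] += 1

# Most frequently skipped topics first
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count} skipped question(s)")
```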




Charting & Reporting KS Results

You have a range of charting and management reporting options available in your KS dashboard.

Performance Scatter

This chart displays group results in performance quadrants.  The upper left quadrant (Q1) contains results where users completed their assessment accurately and quickly.
The upper right quadrant (Q2) shows results with high accuracy, but slower completion times.  Bottom left (Q3) represents lower scores, achieved in a fast time.
Lastly, the bottom right quadrant (Q4) shows test scores which are inaccurate and slow.
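As a rough illustration, here's a small Python sketch of that quadrant logic. Judging 'accurate' and 'fast' against the group averages is an assumption on our part; the chart itself may draw its axes at different cut-offs.

```python
# Sketch: assign each result to a performance quadrant.
# Assumption: 'accurate' and 'fast' are judged against the group averages;
# the real chart may use different thresholds.

def quadrant(score, minutes, avg_score, avg_minutes):
    accurate = score >= avg_score
    fast = minutes <= avg_minutes
    if accurate and fast:
        return "Q1 (accurate & fast)"
    if accurate:
        return "Q2 (accurate, slower)"
    if fast:
        return "Q3 (lower score, fast)"
    return "Q4 (lower score, slow)"

results = [("Ann", 82, 9.5), ("Bob", 45, 14.0), ("Cara", 90, 13.2)]
avg_score = sum(score for _, score, _ in results) / len(results)
avg_minutes = sum(mins for _, _, mins in results) / len(results)

for name, score, mins in results:
    print(name, quadrant(score, mins, avg_score, avg_minutes))
```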



Training Requirements

This chart highlights training topics for a given group of test results. The logic analyses all of the results for a group, references the training tags assigned to questions presented during a test and lists those tags in priority order.
Red indicates the topics which have been answered incorrectly by most people in a given group. Orange is the next highest priority, followed by Yellow. Green training tags are the topics which have been answered correctly by most of the group, so they represent the least urgent issues.
For example: 10 people are presented with one or more questions which include the tag, ‘Rooms’. If 7 of the 10 people answer one or more ‘Rooms’ questions incorrectly, then the keyword, ‘Rooms’, will be flagged at 70% and appear in Red on the chart.
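Here's a minimal sketch of that tag-priority calculation, reproducing the 'Rooms' example. The exact colour band thresholds are assumptions for illustration only.

```python
# Sketch: flag each training tag with the % of the group who answered one or
# more questions carrying that tag incorrectly. Colour bands are assumptions.

def tag_priorities(results_by_user):
    """results_by_user maps user -> {tag: answered_any_question_incorrectly}."""
    tags = {tag for answers in results_by_user.values() for tag in answers}
    priorities = {}
    for tag in tags:
        users_seen = [u for u, a in results_by_user.items() if tag in a]
        missed = sum(1 for u in users_seen if results_by_user[u][tag])
        priorities[tag] = 100 * missed / len(users_seen)
    # Highest priority (most people wrong) first
    return dict(sorted(priorities.items(), key=lambda kv: kv[1], reverse=True))

def colour(pct):
    if pct >= 70: return "Red"
    if pct >= 50: return "Orange"
    if pct >= 30: return "Yellow"
    return "Green"

# 7 of 10 users miss at least one 'Rooms' question -> flagged at 70% -> Red
group = {f"user{i}": {"Rooms": i < 7} for i in range(10)}
for tag, pct in tag_priorities(group).items():
    print(f"{tag}: {pct:.0f}% ({colour(pct)})")
```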


You can also view charts for Training Requests and Skipped Questions on this page.




Question Performance

This chart looks at how each individual question in your library has performed in any given test.
The logic analyses all of the results for each test and presents an aggregate percentage score for each question: the sum of every user's score on that question, divided by the total number of results for that test on the account.
For example: 10 people answer a question called ‘Doors’. If 7 of the 10 people answer the question 100% correctly, 1 person scores 50% and the other 2 score 0%, then the question will score an aggregate of 75% and appear in Yellow on the chart.
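In code, that aggregate works out like this; a minimal sketch reproducing the 'Doors' arithmetic (the chart's colour bands themselves aren't reproduced here).

```python
# Sketch: aggregate per-question score = mean of each user's % score on that
# question, across everyone who was presented with it.

def aggregate_question_score(per_user_scores):
    """per_user_scores: list of % scores (0-100) for one question."""
    return sum(per_user_scores) / len(per_user_scores)

# The 'Doors' example: 7 users score 100%, 1 scores 50%, 2 score 0%
doors_scores = [100] * 7 + [50] + [0] * 2
print(aggregate_question_score(doors_scores))  # 75.0
```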


You can also create a chart for the average time taken to answer each question, by changing the radio button at the foot of the chart.



You can quickly view the relevant question data by clicking on each question name in the 'Question List' box, on the right-hand side of this page.


Select the 'Export question performance info to csv' button to generate a spreadsheet which presents a breakdown of per question time & score values for each user.



Select the 'Display question answer info' button to view a table of 'required' vs 'submitted' answer values for each user. This is an easy way to identify common mistakes or errors across multiple results.



We will shortly be adding a new page called 'Module Performance', which displays a per module breakdown of KS results data.


Group Scores

This chart displays user performance for any given group, in descending order. The X-axis shows the % score attained and the Y-axis displays user names.



Group Comparisons

This chart allows firms to compare performance from one group to another, across up to 9 sets of data at a time.  Group vs group comparisons can cover a wide range of scenarios.
For example: pre- and post-training performance, different project teams, offices in different geographic locations, data representing different job titles or industry disciplines, in-house data vs interview candidates, and so on.



Global Comparisons

This chart allows firms to compare in-house performance against a wider set of anonymous industry aggregate results data, for selected tests.


We will be adding new searching and reporting options in a future release, so you can analyse the user background info and demographics in greater detail. Regional benchmark stats and user profiles are a key theme for the global comparisons section.

So, there you have it, a comprehensive strategy for capturing, analysing and reporting on skills gap and training needs data for your organisation.

R

Monday 16 December 2013

AUGI Top DAUG 2013 - The Results



This year, KnowledgeSmart and AUGI once again teamed up to provide an interactive skills assessment, across 9 popular Autodesk software tracks: 3ds Max, AutoCAD 2D, AutoCAD Civil 3D, AutoCAD Plant 3D, Inventor, Navisworks, Revit Architecture, Revit MEP and Revit Structure. And this year, we also included the general topic of BIM Skills, to make 10 tracks overall.

Here's a brief history of the TD contest, from AUGI.  And here's a summary of last year's competition.

Once again, we had some nice prizes up for grabs.  Track winners walked away with a $100 USD Amazon voucher and a glass trophy (a combination of high score and fastest time determined the winner of each track), and the overall winner won a free pass to AU2014.


We spent several hours on Monday setting up 20 networked PCs in the exhibition hall, at the AUGI stand.


Next, 180 copies of Autodesk software needed checking, then all the KS sample data sets had to be uploaded to each machine and checked again.  Big thanks to Tony Brown for accomplishing the lion's share of this mammoth task. (Especially as, when we were half-way through, we realised that our thumb drive was broken and had basically corrupted all the data sets. So we had to start over!).

Anyway, patience is a virtue, as the saying goes, so here's how we looked when we were all nicely set up on Tuesday (with a whole 90 mins to spare before the competition opened!):


The main competition ran over 2 days (1 x 3 hour slot on day one, then 2 x 3 hour slots on day two). Contestants had to answer 10 questions, using the 2014 version of each software title. Each session was limited to just 12 minutes, so people had to work fast! Special thanks to awesome AUGI team members, Kristin, Bob, Michael, Donnia and Richard, for briefing users and making sure everything ran smoothly.

Here are the Top DAUGs in action:


By the end of the contest, we posted 356 results, with 0% failed tests. We should also mention that the web connection provided by the Venetian was super-fast! We measured 150 MB per sec on day one.

Throughout the competition, we posted a rolling list of the top 10 contestants for each track, on the AUGI big screen.

The Results

Congratulations to the following contestants, who won their respective tracks:


And a special mention to the overall winner of AUGI Top DAUG 2013:

Ben Rand

Analysis

So, let's take a detailed look at the results of this year's Top DAUG competition.

Overall

No. of Tests Completed:  356
Overall Average:  48% in 10 mins 49 secs
(NB the average score for 2012 was 41%).



Track 1 - 3ds Max

Track Winner: Ted Moore
Winning Score: 60% in 12 mins 0 secs

Top 10 Contestants:


No. Completed: 14
Group Average: 36% in 10 mins 14 secs



Track 2 - AutoCAD 2D

Track Winner: Ben Rand
Winning Score: 100% in 9 mins 40 secs

Top 10 Contestants:


No. Completed: 118
Group Average: 47% in 11 mins 21 secs



Track 3 - AutoCAD Civil 3D

Track Winner: Steve Boon
Winning Score: 90% in 9 mins 30 secs

Top 10 Contestants:


No. Completed: 36
Group Average: 48% in 10 mins 50 secs



Track 4 - AutoCAD Plant 3D

Track Winner: David Wolfe
Winning Score: 88% in 9 mins 30 secs

Top 10 Contestants:



No. Completed: 10
Group Average: 42% in 11 mins 15 secs



Track 5 - BIM Skills

Track Winner: Brian Smith
Winning Score: 75% in 9 mins 55 secs

Top 10 Contestants:



No. Completed: 10
Group Average: 49% in 7 mins 6 secs



Track 6 - Inventor

Track Winner: Ryan Johnson
Winning Score: 95% in 12 mins 0 secs

Top 10 Contestants:


No. Completed: 38
Group Average: 45% in 10 mins 51 secs




Track 7 - Navisworks

Track Winner: Ryan Fintel
Winning Score: 90% in 11 mins 30 secs

Top 10 Contestants:


No. Completed: 11
Group Average: 50% in 8 mins 27 secs



Track 8 - Revit Architecture

Track Winner: Bob Mihelich
Winning Score: 96% in 11 mins 35 secs

Top 10 Contestants:


No. Completed: 84
Group Average: 50% in 11 mins 16 secs



Track 9 - Revit MEP

Track Winner: David Rushforth
Winning Score: 100% in 11 mins 0 secs

Top 10 Contestants:


No. Completed: 18
Group Average: 63% in 10 mins 50 secs



Track 10 - Revit Structure

Track Winner: Kristopher Godfrey
Winning Score: 78% in 5 mins 50 secs

Top 10 Contestants:



No. Completed: 17
Group Average: 51% in 9 mins 34 secs



Popular Tracks

The most popular tracks, in order of completed tests, were as follows:

AutoCAD 2D - 118 results
Revit Architecture - 84 results
Inventor - 38 results
AutoCAD Civil 3D - 36 results
Revit MEP - 18 results
Revit Structure - 17 results
3ds Max - 14 results
Navisworks - 11 results
AutoCAD Plant 3D - 10 results
BIM Skills - 10 results

Total = 356 results


Range of scores

Interestingly, across the 10 tracks, we saw scores ranging from 0% to 100%. Here is a summary of both ends of the performance scale:

3 x 100% scores (2 x AutoCAD 2D, 1 x Revit MEP).
1 x 0% score (Revit MEP), 16 x < 20% scores (1 x 3ds Max, 1 x Navisworks, 1 x RST, 1 x Plant 3D, 2 x Inventor, 2 x Civil 3D, 3 x AutoCAD 2D, 5 x RAC).

Honourable mention

Special recognition goes to John Fout (@scribldogomega) who placed in the top 10 for 5 tracks. An amazing all-around performance!


Boys vs Girls

Finally, let's take a look at the demographic breakdown of the competition. Out of 356 contestants, 315 were male and 41 female. The average overall performance for each group breaks down like this:

Girls: 48% in 10 mins 58 secs
Boys: 48% in 10 mins 51 secs



So, that's Top DAUG finished for another year. A thoroughly enjoyable 3 days at AU2013. 356 completed tests, across 10 popular tracks. Once again, the overall standard was extremely high, with some outstanding individual performances from our track winners and top 10 contestants.

Congratulations to all our winners. Thanks to the AUGI team for all their support, with a special mention for AUGI President, David Harrington. Lastly, a big thank you to everyone who took part in this year's contest.

See you all in 2014 at Mandalay Bay for more fun & games!

R