Thank you for using Inspera! Below you will find all general product updates made available in 2023. Please note that details on planned releases are added approximately 10 days ahead of the release, and there might be changes running up until the release date.
- Our regular release notes will be sent at 16:00 CEST on the scheduled maintenance day.
- New features will be available after the maintenance window at 22:00 CEST.
Stay informed and get more out of Inspera:
- For more insight into what the Inspera team is working on, please visit our public roadmap.
- Stay up to date with our latest product releases and updates by subscribing to our monthly Release Notes. You can register here.
- Learn more about our platform and features by registering for upcoming webinars or viewing past webinars on our website here.
Coming in June ☀️
Updated release date: June - Bulk Explanations for all candidates
Within Marking 2.0, send all Explanations to your candidates at once! Previously, Explanations had to be sent one by one, which was time-consuming and tedious. With this new addition, you can now send all the explanations to your candidates in a single step. This will save you time and streamline the process of providing feedback to your candidates.
Updated release date: June - Common Feedback to all the candidates
Provide Feedback statements to all candidates on a test. Previously, this was not possible in Inspera. With the introduction of a new workspace called "Assessment Feedback", you can now create or upload a Feedback statement that will be easily accessible to candidates via their Candidate report. This new workspace not only streamlines the process of providing feedback to candidates but also enhances the overall assessment experience. We hope this new feature will be valuable for institutions and candidates alike.
Additional Grading logic in Multiple Attempts
As of today, the final grade is calculated based on the candidate’s highest score across all attempts. This approach encourages candidates to keep trying until they achieve their best possible score. We will additionally support the following grading logics:
- Average: In this grading logic, the final grade is calculated based on the average of all the candidate’s attempts. This approach takes into account the candidate’s overall performance across all attempts and can be useful for assessing overall mastery.
- Latest: In this grading logic, the final grade is based on the candidate’s score on their final attempt only. This approach can be useful for encouraging candidates to continue working on the material until they achieve a satisfactory level of mastery.
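The three grading logics can be pictured in a few lines. This is a minimal sketch; the function name and signature are ours for illustration, not part of any Inspera API.

```python
def final_score(attempt_scores, logic="highest"):
    """Compute a final score from a candidate's attempts (oldest first).

    Illustrative sketch of the grading logics described above;
    not Inspera's actual implementation.
    """
    if not attempt_scores:
        raise ValueError("at least one attempt is required")
    if logic == "highest":
        return max(attempt_scores)
    if logic == "average":
        return sum(attempt_scores) / len(attempt_scores)
    if logic == "latest":
        return attempt_scores[-1]
    raise ValueError(f"unknown grading logic: {logic!r}")

scores = [55, 70, 65]                  # three attempts, oldest first
print(final_score(scores, "highest"))  # 70
print(final_score(scores, "latest"))   # 65
```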
One-time and Permanent User Options for Assessment path
As of today, we allow Single Sign-on (SSO) for candidates in the Assessment path, which is useful for candidates who log in with their organization’s email address. We will additionally support the following candidate types when adding users to an Assessment path:
- One-time users: By using the One-time user option, you can assign one-time users to an Assessment path. They are independent, which means that a one-time user is created per candidate per Assessment Path.
- Permanent users: By using permanent users, candidates can use the same username and password to access all their Assessment paths. It is especially useful when candidates need to enroll for multiple assessments or long-term coursework.
Sort tests in Assessment path
The ability to sort individual tests within an Assessment path aids auditing and helps verify that tests have been added correctly. By sorting tests by date, administrators can easily spot any discrepancies or omissions in the Assessment path.
CSV download for logs in Assessment path
CSV download can be used to export log data from the Assessment path for further data analysis or custom reports.
Predefined feedback per response outcome in Numerical simulations (Open beta)
Within the Numerical simulations question type, we are introducing predefined feedback that delivers tailored responses based on the candidate’s input, without any work from the graders. Authors will be able to create customized feedback for each potential response outcome.
Application Switched Event notice in Inspera Exam Portal (IEP)
In order to further improve the security of the Inspera Exam Portal (IEP) with Open mode, we are introducing a new test event. An Application Switched Event will be created every time a candidate changes focus from IEP to a third-party application and will help customers understand which online resources candidates are accessing during an assessment in Open mode. The event will include a screenshot of the application and a video of the switch. Learn more
This feature is being released as a part of IEP version 1.15.7, and it requires an update to the video player for Inspera Smarter Proctoring. Please contact the service desk to activate this functionality.
- We have enhanced the candidate user interface in Candidate-selected questions by adding clearer instructions on the candidate submissions page, which notifies candidates if they have left any questions unanswered. Learn more
- We have made changes to the Systems check page to disable the real-time logging connection check by default.
Heads up 💡
May Product Release Webinar
Join us for our upcoming webinar on May 9th, where we will be announcing our latest release. This monthly webinar is an excellent opportunity to learn more about our products and services and to hear from our expert team members about the latest developments.
The webinar is scheduled for Tuesday, May 9th, 2023 at 15:00 (UK Time / BST), and you can register by clicking on the link provided below.
Registration link: May Product Release Webinar
Safe Exam Browser (SEB) updates
We will remove support for SEB versions 2.1.4 and 2.3.2 for Mac in the August release. Learn more
New in this release 🚀
Enhancing Hotspot Visibility: Customizable Border Colors and Opacity
A challenge our Authors have faced when constructing questions where hotspot areas are presented to candidates is that those areas can be hard to see, particularly against a dark background image. To address this, we're introducing a new feature that enables Authors to set the border color and opacity of the hotspot areas, making hotspots much easier for candidates to identify regardless of the background color. For more information on best practices for using the Hotspot border and opacity, please visit the Help Center article on Authoring Accessible Questions.
Please note that this feature is in Open Beta and will require activation until September 2023, when it will be enabled for all users in conjunction with the release of polygon hotspot shapes.
Multiple attempts (Open beta)
By enabling Multiple attempts on assessments, formative testing becomes a valuable tool for enhancing test-driven learning and empowering candidates to improve their understanding of key topics. Learn more
Multiple attempts drive assessment-driven learning with automatically marked questions:
- The planner can schedule assessments allowing for formative testing with multiple attempts so candidates can see how their score improves over time.
- The planner can set the maximum number of attempts and candidates can retake the assessment up to the set limit. The highest score is considered the final result.
- Instant feedback and an updated final score are available to candidates on each submission, with an opportunity to retry/reattempt up to the set limit.
- The automated marks are displayed within the Marking 2.0 tool for questions that can be automatically marked. The grader has the option to override the score manually if necessary.
- The planner can download all attempts per candidate from the Marking 2.0 tool.
- It supports Numerical simulations which allow unique randomized Numerical Questions on every attempt.
- Additional logic like “average”, “latest” for calculating the final score
- Randomization of questions in the question set in each attempt
- Improved accessibility in the new candidate dashboard
- Multiple attempts test within the Assessment path
This feature is only available on test tenants. Please contact the service desk to activate this functionality.
Upcoming tests tab within the Candidate dashboard
Candidate dashboard after all attempts completed
Accessibility improvements in the new Candidate dashboard (Open beta)
In this release, our main focus was on addressing fundamental accessibility hygiene in the new Candidate dashboard, particularly related to four key annotations: "Skip link," "Landmarks," "Heading with page title," and "Alt attribute." These elements play a significant role in enhancing the screen reader experience for users. There will be continued work on accessibility to improve it further.
Other Deliver improvements
- We fixed an issue where candidates using Multiple Attempts were directed to an error page instead of the test when starting an attempt from the dashboard. Previously, candidates had to navigate back and restart the test.
- We fixed an issue where candidates using Multiple attempts were restricted from accessing the test due to a synchronization issue in the system in some cases.
- We have enhanced the user interface for displaying the details of individual tests within an Assessment path. The new interface allows candidates to view the test details directly on the same page, eliminating the need to navigate to different pages.
- We identified an issue where candidates were not redirected to the details section of the relevant individual test after clicking "Click here to get ready", forcing them to scroll manually to the information they needed to prepare. We have fixed this by auto-scrolling the candidate's view to the relevant individual test section on the page.
- We have added in-product help links in Assessment path so that users can access specific help articles that are relevant to their current task or functionality.
- We added support for removing individual tests from the Assessment path through open APIs. Please refer to our Open API documentation for more details.
- We fixed an issue where the width of the left menu bar in Assessment path was not adjusted correctly, making some of the text illegible and inaccessible to users.
- API-endpoints serving documentation (schema) for resource-download-requests no longer require authentication.
- Added support for editing externalUserId of type LTI via admin-ui - useful if LTI is used without any synchronization with SAML-SSO.
- A 500-error for requesting QTI-content (/api/content/qti) without any live revision is changed to a 404.
- Added support for handling ladok-messages ‘out of sequence’
Heads up 💡
April Product release webinar
Join us for our upcoming webinar on April 13th, where we will be announcing our latest release. This monthly webinar is an excellent opportunity to learn more about our products and services and to hear from our expert team members about the latest developments.
The webinar is scheduled for Thursday 13 April 2023 at 15:00 (UK Time / BST), and you can register by clicking on the link provided below.
Registration link: April Product release webinar
Change to release dates
Starting this April, we'll be updating our product release schedule to better serve you. Instead of releasing on the first Friday evening of the month, we'll be moving the release date to the following Tuesday. However, there may be some months where this falls on a Wednesday due to major bank holidays. Don't worry - this change won't affect live service, and any maintenance changes that might impact live services will be implemented after 22:00 GMT on the same day.
We'll keep you updated on the status of the deployment via our product status page (https://status.inspera.no/), and we'll also provide release notes, showcases, and webinars to support each deployment. Below are the new release dates for 2023; we'll let you know of any changes at least a month in advance.

| Release | Date | Note |
| --- | --- | --- |
| April release | Wednesday, April 12th, 2023 | Postponed to Wednesday for the Easter bank holidays |
| May release | Tuesday, May 9th | |
| June release | Tuesday, June 6th | |
| July release | Tuesday, July 4th | |
| August release | Tuesday, August 8th | |
| September release | Tuesday, September 5th | |
| October release | Tuesday, October 3rd | |
| November release | Tuesday, November 7th | |
| December release | Tuesday, December 5th | |
| January release | Wednesday, January 10th, 2024 | Postponed to Wednesday for the Holiday season |
Safe Exam Browser (SEB) updates for Windows and Mac
- Windows: Ended support for 3.1.1
- Mac: We have added support for 3.2.5 and this is now the default version when downloading SEB for Mac from Inspera. SEB reported a security issue in version 3.2.4, which is fixed in 3.2.5, and we are therefore not adding support for 3.2.4 for Mac.
New in this release 🚀
Numerical Simulations question type (Open Beta)
We have enabled multiple candidate response fields (interactions) in the question type, allowing multiple questions using the same Program Model. With this release, we are adding support for Error Carried Forward. Error Carried Forward is a useful addition that can help ensure that candidates are appropriately rewarded with partial marks for their knowledge and approach, even if they make mistakes in previous parts of the assessment. Learn more
Supporting this required larger data model changes that are incompatible with existing Numerical Simulations questions. Editing existing questions after the April release will break them unless the Program Model and Response outcomes are also re-authored.
Error Carry Forward must be enabled separately from the Numerical Simulations question type; please contact the service desk to activate this functionality.
Important: We will continue to improve Numerical Simulations with support for Predefined feedback per response outcome in the next stage of development. However, this may require some changes to the system which could break existing questions while the question type is still in Beta. We aim to take it out of Beta this Q3, but until then, this question type should not be used in real assessments.
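To illustrate the principle behind Error Carried Forward: a later part is checked against the value implied by the candidate's own earlier answer, rather than the model answer. The sketch below is a hedged illustration of that idea only, not Inspera's marking engine.

```python
def mark_with_ecf(part1_answer, part2_answer, part1_correct,
                  part2_from_part1, tol=1e-6):
    """Error Carried Forward sketch: part 2 is checked against the value
    derived from the candidate's OWN part 1 answer, so a mistake in
    part 1 does not automatically forfeit the marks for part 2."""
    part1_ok = abs(part1_answer - part1_correct) <= tol
    expected_part2 = part2_from_part1(part1_answer)  # carry the error forward
    part2_ok = abs(part2_answer - expected_part2) <= tol
    return part1_ok, part2_ok

# Part 1 asks for 3 * 4 (correct: 12); part 2 asks for "your part 1 answer + 5".
p1_ok, p2_ok = mark_with_ecf(
    part1_answer=11,   # wrong
    part2_answer=16,   # but consistent with the candidate's own part 1
    part1_correct=12,
    part2_from_part1=lambda x: x + 5,
)
# p1_ok is False, p2_ok is True: the correct method in part 2 still earns credit
```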
Other Authoring improvements
- Essay questions: Fixed bug where choosing the Basic toolbar did not always save, leading to the Default toolbar being used instead.
Assessment Path (Open Beta)
With the addition of Assessment paths, we aim to give educators more flexibility in the way they deliver assessments. Learn more
Assessment path supports continuous assessment in either a formative (assessment for learning) or summative (assessment of learning) context. It allows you to group components of an assessment together to be marked individually and graded as a whole. It also supports multiple assessments being grouped in the same way.
There are two use cases for Assessment Path:
- It allows educators to conduct a series of assessments to have their candidates evaluated based on their performance over a period of time. For instance, all the assessments planned for a course in a term or semester can be grouped together, each with its unique weight to provide an overall grade to the candidate in the course.
- Alternatively, it can be used for summative assessments consisting of several components, each with its own time window, duration, graders, and feedback. A good example here is language proficiency assessments that consist of 4 parts for assessing reading, writing, listening, and speaking skills. In this case reading, writing and listening can be conducted as one continuous multi-part assessment (same time window), while the speaking test is scheduled at a different time.
Assessment path’s open beta supports the following:
- Schedule assessments with components that are graded as a whole.
- Candidates can be added to the whole assessment or individual components for flexibility, while contributors can be added to the whole assessment.
- Grading committees can be added to the whole assessment.
- Auto calculation of grade on whole assessment (threshold) with each individual component’s marks weighted.
- Candidates can view the summary of results and feedback for the whole assessment and individual components in the Candidate dashboard.
- A new Candidate dashboard that provides a better overview of all assessment activities to “house” all assessments in one place. It also comes in a new improved design.
- New Open API to create multi-component assessments.
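The auto-calculated weighted grade mentioned in the list above can be sketched as follows. The marks, weights, and threshold table are invented illustrations, not Inspera defaults; the real configuration lives in the grading settings.

```python
def weighted_percentage(components):
    """components: (marks, max_marks, weight) per individual test.
    Weights are assumed to sum to 1.0; returns the overall percentage."""
    return sum(marks / max_marks * weight
               for marks, max_marks, weight in components) * 100

def grade_for(percentage, thresholds):
    """thresholds: (minimum_percentage, grade) pairs, highest first."""
    for minimum, grade in thresholds:
        if percentage >= minimum:
            return grade
    return thresholds[-1][1]

components = [(40, 50, 0.6), (18, 25, 0.4)]  # two tests in the path
overall = weighted_percentage(components)     # 76.8
grade = grade_for(overall, [(90, "A"), (75, "B"), (60, "C"), (0, "F")])  # "B"
```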
Note: The Assessment Path requires the new Candidate dashboard. The new Candidate dashboard will be in beta throughout 2023, and we don’t recommend using it in live summative assessments before its general release. However, it can be used for low-stakes formative tests from Q3. The new Candidate dashboard will coexist with the old dashboard during the beta period, and candidates will be directed to the new dashboard for Assessment path until we make the full transition to it.
Set a date and time for the release of Candidate report (Open Beta)
With this release, we are bringing more control over when the Candidate report is made available. A Planner can now choose a date and time for the release of the Candidate report when setting up a new test in Deliver. Learn more
Please contact the service desk to activate this functionality.
Marks as Excel (Open Beta)
We're excited to announce an expansion of our offline marking functionality with the addition of a new feature that allows you to import marks/points from Excel. In Marking 2.0, you can now download "Marks as Excel file". Graders can use the downloaded Excel file to input marks for the candidates on the test. To import marks from the Excel file, simply go to "Import" and choose "Marks from Excel," then select the file containing the marks. This new feature will enhance the marking experience and save time for graders. Learn more
Please contact the service desk to activate this functionality.
Other Grading improvements
- Fixed an issue where Marking 2.0 menus could become hidden and unresponsive in Firefox.
Inspera Exam Portal 💻
Additional security measures in Inspera Exam Portal (IEP) to control clipboard usage
We have made further enhancements to control how the browser clipboard can be used with Inspera Exam Portal (IEP) in Strict and Moderate modes. This is to ensure that only information deriving from within IEP is available to copy and paste during a test.
When enabled, this functionality will prevent candidates from carrying out certain actions:
- For Rich Text essay questions: using the copy button on the essay editor toolbar
- For Rich Text essay questions: using the Context Menu (by right-clicking) to copy, paste or cut
- Running a programmed script to insert pre-prepared answers into the assessment via the clipboard. If a script is used it will cause IEP to shut down as a preventative measure, and you’ll see that a warning is logged.
The standard keyboard shortcuts for copy, paste, and cut are still available to candidates for these questions and tooltips are included as a guide. Learn more
Please contact the service desk to activate this functionality.
Allow a candidate to change WiFi in the Inspera Exam Portal (IEP)
Candidates can be granted permission to change their WiFi connection during their assessment while using the Inspera Exam Portal (IEP). This is applicable in all modes (Strict, Moderate, and Open) and is compatible with Windows and macOS.
The planner can enable/disable this option by selecting the new ‘Allow candidates to change WiFi’ checkbox under the Security settings of the test. When enabled, candidates can select the WiFi icon in IEP and choose their preferred network from the menu. Learn more
- Support for resolving Test Events based on external IDs linked to systems other than API
APIs that already support referencing a Test Event by an external ID now also accept an “externalSystem” parameter for lookups against external systems other than the default of “API”. This can be used, for example, to make API calls based on the external ID of a test that was created via LTI.
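As a sketch, such a lookup request might be built like this. The host, path, and the "externalId" parameter name are placeholder assumptions; only the "externalSystem" parameter itself comes from the release note.

```python
from urllib.parse import urlencode

# Placeholder host and path - not documented Inspera routes.
base = "https://example.inspera.com/v1/test-events"
params = {
    "externalId": "course-42-final-exam",  # the ID the LTI tool supplied
    "externalSystem": "LTI",               # default is "API" when omitted
}
url = f"{base}?{urlencode(params)}"
```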
- Bug fix for the Content APIs
The Content APIs would in some cases return an HTTP 500 error when being called without permission to the object being referenced. These APIs will now correctly return an HTTP 403 permission denied response instead.
March 9th Release Webinar
We are excited to invite you to join us for our upcoming webinar on March 9th, where we will be announcing our latest release. This monthly webinar is an excellent opportunity to learn more about our products and services and to hear from our expert team members about the latest developments.
Inspera Exam Portal and Smarter Proctoring 💻
- IEP 1.15.6 (ETA April) adds an Event to the Screen Recording every time a third-party application is used to improve post-session reviews.
- IEP 1.15.4 has enhanced security when using the Moderate Security Policy on Windows: IEP now terminates the first time focus is lost to a third-party application, rather than the second time as before.
To learn more about IEP’s Moderate Security Policy, visit the Help Center article Inspera Exam portal (IEP): Windows Security Screen Access for Moderate Security Tests.
Safe Exam Browser updates for Windows
SEB 3.4.1 for Windows is now enabled for all and is the default version that candidates download when downloading SEB on Windows from Inspera. We have enabled two displays in SEB 3.4.0 and 3.4.1 to accommodate candidates requiring external screens. Support for 3.0.1 Windows has ended.
| Removing support | Adding support | Default version |
| --- | --- | --- |
| SEB 3.0.1 (Windows) | SEB 3.4.1 (Windows) | SEB 3.4.1 (Windows) |
New in this release 🚀
Inspera Exam Portal Recording options
To balance integrity and data privacy on Open Book Assessments, we have made some requested changes to IEP when Screen-Only recording is used for automated proctoring.
The following steps in the Inspera Exam Portal flow are disabled to not require candidate access to camera and microphone:
- Camera/Mic page
- Photo page
- ID page
To learn more about IEP’s recording options, visit the Help Center article Test setup - Inspera Smarter Proctoring.
New workspace names in Marking 2.0 for Norwegian and Swedish customers
In collaboration with Higher Ed institutions in Norway and Sweden, we have decided to rename workspaces in Marking 2.0. The goal is to make it easier for the Graders to understand which workspace they should use to perform a given task or find the information they are looking for.
| Existing workspace name (NO/SE) | New workspace name (NO) | New workspace name (SE-HE/SE-K12) |
| --- | --- | --- |
| Vurdering/Bedömning | Vurder oppgavesvar | Bedöm uppgiftssvar |
| Begrunnelser/Motivering av betyg | Begrunnelser | Motivering av resultat |
| Resultatoversikt/Resultatöversikt för student_elev | Kandidatrapport | Studentrapport/Elevrapport |
Content production (Author) ✍🏼
Polygon shape in Hotspot question type now in Open Beta
We’re excited to bring you our new Polygon shape feature to the Hotspot question type. Previously, authors were restricted to circles and rectangles as hotspot areas. Now, they are able to draw a precise polygon. A polygon must have at least three points and there is no limit on the maximum number. Each point on the polygon can be manipulated once drawn. If the background image is changed, the polygon can be adapted to fit the new image. This feature allows Authors to use the shape more creatively to outline objects in a range of different subjects including medicine and biology.
The power of this new capability allows you to ensure that only the correct parts of a background image are marked as correct. It requires candidates to be precise in where they click rather than a general area within a circle or square.
This feature is released in Open Beta and requires activation. Please contact the Service Desk to activate. We plan to enable this for all with the September release and recommend getting familiar with the new possibilities.
Test setup (Deliver) 🎛️
Text to Speech now available in Icelandic
We have further expanded the languages that the Text to Speech feature supports to include Icelandic. To change the language, select the ‘Settings’ icon on the test start screen.
To learn more about Text to Speech, visit our Help Center article Text to speech.
Option to show assessment rubric to the candidate during a test
One benefit of Rubrics is that they set out clear expectations for the candidate by defining the criteria for grading a particular submission. By seeing the rubric, candidates can gain a better understanding of the specific expectations and requirements. With this new addition to our Rubrics functionality, Authors and Planners can choose to let the candidate access the Rubric for a question during the test. The Rubric is available to the candidate as a resource.
To learn more about adding Rubrics as a resource, visit the Help Center article Rubrics overview.
Other Marking improvements:
- When a grader downloaded the “Marks as Excel file”, we listed all candidates on the test in the file, regardless of whether they belonged to the Committee the grader was assigned to. Now, the download only includes the candidates relevant to the specific Grader.
February 9th Release Webinar
View the February 9th, 2023 Release webinar by visiting this link.
Safe Exam Browser 3.4.0 for Windows enabled for all
SEB 3.4.0 for Windows is now the default version that candidates download when downloading SEB on Windows from Inspera. A blocker issue has been found in SEB 3.2.2 for Mac, and a fix is in progress, planned for release in April.
To learn more about the System Requirements, visit our Help Center article System Requirements.
Supported SEB versions plan: the full list of supported versions after the April release.
Inspera Exam Portal and Smarter Proctoring 💻
Since we released Inspera Exam Portal 1.14.14 back in September 2022, we have continued improving security and integrity for closed book assessments. We now have a major new release, Inspera Exam Portal 1.15.4, that offers many security improvements.
To learn more, please visit our Help Center article Versions of Inspera Exam Portal. You can test the new version in your production tenant together with your existing version. Please make a Service Request to our Service Desk team, and we will enable the new IEP release on a new URL.
What about ChatGPT?
We have just released this blog article outlining strategies to continue supporting Open-Book assessments with integrity:
To learn more, visit the blog post on our website Conducting open book assessments with integrity in a digital world.
New in this release 🚀
Qualitative rubrics in Open Beta
With this release, we are happy to announce that we are adding Qualitative rubrics to our existing rubrics functionality. A Qualitative rubric is used to provide feedback and evaluate subjective, non-numerical aspects of candidate performance. It provides clear, specific criteria for evaluating the quality of the work and helps to ensure consistent and fair grading.
For an Art History assessment, it can look like this:
As you can see, there is no point scale used to assess candidate performance with this rubric. When Marking with a Qualitative rubric, the Grader selects a Level of performance for each criterion and provides feedback specifically for that level.
Qualitative rubrics provide several benefits:
- Clarity: Provide clear, specific, and concrete criteria for what candidates are expected to know and be able to do.
- Fairness: Eliminate subjectivity and bias in grading by establishing clear criteria and standards for assessment.
- Feedback: Focus feedback on specific aspects of a candidate’s performance, rather than providing a general grade or score.
- Learning: Promote a growth mindset by encouraging candidates to see assessment as a learning opportunity rather than as a source of stress or anxiety.
- Communication: Facilitate clear communication between teachers and candidates about what is expected of candidates and what they need to do to improve.
To learn more about the different types of Rubrics Inspera Assessment now supports, visit our Help Center article Rubrics overview.
To activate Qualitative rubrics, please contact the service desk.
Other Marking improvements:
- We fixed an issue where the page would fail to display the Candidate report when the Grader navigated between candidates using the arrows in the footer.
- When authoring a question with rubrics, we now include a link to the rubrics documentation so that authors who are unfamiliar with the rubrics functionality can access it directly from the Author module.
- To align with the options available in question authoring and classic grading we now support negative marks for automatically calculated questions in Marking 2.0.
- It is now possible to update the threshold values in Marking 2.0 for tests that do not have any submissions.
- We fixed an issue where Graders lost their private notes when navigating quickly between candidates.
- If a candidate submitted both a PDF and Inspera Scan Sketches on the same question, only the PDF would appear for the grader in Marking 2.0. We now have a warning message alerting the Graders that there are Inspera Scan Sketches linked to the response.
Content production (Author) ✍🏼
Numerical Simulation in Open Beta
Numerical Simulation brings Authentic STEM assessment into Inspera Assessment. The new question type is based on Numeric Entry, where the question is answered by typing a numerical value that is automatically marked.
Numerical Simulation supports auto-marked math questions based on a program model to declare variables (inputs), formulas, teacher answers (outputs), and response outcomes with scoring. The program model enables randomization and more authentic assessment through real-world programmed scenarios. For STEM and Business subjects, the use cases are endless.
For those of you familiar with STACK, it will be easy to get started with Numerical Simulation as it uses the Maxima language which is also used by STACK.
See an example from the Authoring tool:
To learn more about Numerical Simulation, visit our Help Center article Question Type - Numerical Simulation.
To activate Numerical Simulation, please contact the Service Desk.
The event/webhook for ‘resource_export_complete’ is now in its final/release state, and the triggering user now appears in the regular place for events. Note that this event is exempt from the general rule that triggering users should not be notified of their own events. This event will now only be triggered when the resource-download API is used.
Test setup (Deliver) 🎛️
Currently, the API endpoint /v1/users/student makes it possible to create duplicate usernames without any error being returned. From the May 5 release onwards, creating duplicate usernames will not be allowed, and an error will be returned in that scenario.
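Once the change lands, integrations calling /v1/users/student may want a client-side pre-flight check so that only non-duplicate usernames are submitted. A minimal sketch; the function and data shapes are ours, not part of any Inspera SDK.

```python
def partition_usernames(requested, existing):
    """Split requested usernames into (creatable, duplicates), so that
    user-creation calls only send names that will not be rejected as
    duplicates. Duplicates within the request itself are also caught."""
    seen = set(existing)
    creatable, duplicates = [], []
    for name in requested:
        if name in seen:
            duplicates.append(name)
        else:
            seen.add(name)
            creatable.append(name)
    return creatable, duplicates

creatable, duplicates = partition_usernames(["ada", "bob", "ada"], existing=["bob"])
# creatable == ["ada"]; duplicates == ["bob", "ada"]
```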
Scheduled downtime January 6th, 2023
During the January 2023 maintenance window (Friday, Jan 6, 21:00-23:00 UTC), Inspera Assessment will undergo a period of planned complete unavailability to carry out maintenance tasks. While candidates logged into tests will be able to continue working offline, all online functionality will be unavailable for some time, and users may be required to log back in when the service is back up. We apologise for the inconvenience. Please avoid scheduling your assessments during this time window.
For additional information about upcoming maintenance, visit our Status page. Learn more
Next release webinar
We have moved our release webinar to February 9th, 9am CET to include our February release in the webinar. Please join by using this link.
New in this release 🚀
Rubrics are moving to general release
We are moving Rubrics from open beta and are thrilled to offer question-level Rubrics to everyone. With our new rubrics editor, Authors can quickly and easily create different assessment rubrics (point-based, point-ranged, and percentage-ranged). Graders will be able to use the rubric to quickly mark a question and provide feedback to the candidate. The candidate will have access to the rubric and the feedback in the Candidate report.
If you are unfamiliar with assessment rubrics, here are some benefits they provide:
- Articulates expectations to the candidate and lays out a scoring mechanism
- Reduces time spent evaluating
- Minimizes inconsistent marking
- Gives candidates quicker feedback
- Eases communication about assessment performance
You can learn more about Rubrics in our Help Center article, Rubrics overview.
To activate rubrics please contact the service desk.
Minor improvements in Marking
- We added missing Icelandic translations to our marking tools (Classic and 2.0).
- Previously, if you wanted to update the max score for criteria in a rubric, you had to do that in the criteria column. We saw that users tried to use the level of performance to achieve this, so we now support changing the max score by updating the score for the level of performance.
- We had some inconsistencies in updating feedback and marks when grading using a rubric. (Marks were changed to the previous mark and feedback was disappearing when editing). This is now fixed.
- We had an issue with displaying the rubric to the candidate in the candidate report. This is now fixed.
Test setup and Monitoring 🎛️
Export (CSV) from Monitor to include all the warnings on a candidate
Previously, an export from Monitor (UI) included only the most recent “Warning” in the CSV file, preventing customers from completing their due diligence. The CSV export now includes all existing “Warnings” for each candidate.
To ensure the functionality works reliably as more customers adopt it, it currently requires activation by request to the service desk; we will then automatically activate it for everyone in a later release.