Below you will find all general product updates made available in Inspera in 2023. Please note that details on planned releases are added approximately 10 days ahead of the release and there might be changes running up until the release date.
We will only communicate updates that are available to all customers; improvements made in collaboration with individual customers will be communicated directly when ready for use. This ensures the solution scales and solves the intended need before everyone adopts new functionality.
For more insight into what the Inspera team is working on, please visit our public roadmap.
You can also subscribe to our monthly Release Notes by email by Registering Here.
Quick navigation:
- April release - Planned release April 12th, 2023
- March release - Released March 3rd, 2023
- February release - Released February 3rd, 2023
- January release - Released January 6th, 2023
Coming in April (Planned release April 12th, 2023)💡
Safe Exam Browser updates for Mac and Windows
A blocker issue has been found in SEB 3.2.3 for Mac. We are now waiting for a new SEB version for testing. Assuming we get this in time, and it passes internal testing, this will be supported in April. Due to this delay, we will wait to remove support for older Mac versions until the July release.
To learn more about the System Requirements, visit our Help Center article System Requirements.
|         | Removing support | Adding support | Default version |
|---------|------------------|----------------|-----------------|
| Mac     | -                | 3.2.4          | 3.2.4           |
| Windows | 3.1.1            | -              | 3.4.1           |
Assessment Paths
We are excited to present our new pillar in Inspera Assessment!
Assessment paths support continuous assessment in either a formative (assessment for learning) or summative (assessment of learning) context. They allow you to group the components of an assessment together to be marked individually and graded as a whole, and multiple assessments can be grouped in the same way.
There are two use cases for Assessment Paths:
- It allows educators to conduct a series of assessments to have their candidates evaluated based on their performance over a period of time. For instance, all the assessments planned for a course in a term or semester can be grouped together, each with its unique weight to provide an overall grade to the candidate in the course.
- Alternatively, it can be used for summative assessments consisting of several components, each with its own time window, duration, graders, and feedback. A good example here is language proficiency assessments that consist of 4 parts for assessing reading, writing, listening, and speaking skills. In this case reading, writing and listening can be conducted as one continuous multi-part assessment (same time window), while the speaking test is scheduled at a different time.
Assessment path’s open beta supports the following:
- Schedule assessment with components that are graded as a whole.
- Candidates can be added to the whole assessment or individual components for flexibility, while contributors can be added to the whole assessment.
- Grading committees can be added to the whole assessment.
- Auto calculation of grade on whole assessment (threshold) with each individual component’s marks weighted.
- Candidates can view the summary of results and feedback for the whole assessment and individual components in the candidate dashboard.
- A new candidate dashboard that provides a better overview of all assessment activities to “house” all assessments in one place. It also comes in a new improved design.
- New Open API to create multi-component assessments.
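To make the weighting concrete, here is a minimal Python sketch of how a weighted, threshold-based overall grade could be computed. The component scores, weights, and grade thresholds are made-up examples, not Inspera's implementation.

```python
def overall_grade(components, thresholds):
    """components: list of (score_pct, weight) pairs; weights should sum to 1.
    thresholds: list of (min_pct, grade), checked from highest to lowest."""
    total = sum(score * weight for score, weight in components)
    for min_pct, grade in sorted(thresholds, reverse=True):
        if total >= min_pct:
            return grade
    return "F"

# Three assessments over a semester, weighted 50/30/20:
components = [(82.0, 0.5), (74.0, 0.3), (90.0, 0.2)]
thresholds = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
print(overall_grade(components, thresholds))  # 81.2 overall → "B"
```

The weighted total here is 82·0.5 + 74·0.3 + 90·0.2 = 81.2, which falls in the "B" band of the example threshold scale.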
The new candidate dashboard will coexist with the old dashboard during the beta period and candidates will be directed to the new dashboard for Assessment path until we make the full transition to it.
The Assessment path comes with a new, intuitive, and accessible user interface that shows where we are heading in improving our user experience:
New Assessment path settings page
New Candidate dashboard displaying a single test.
---
March 9th Release Webinar
We are excited to invite you to join us for our upcoming webinar on March 9th, where we will be announcing our latest release. This monthly webinar is an excellent opportunity to learn more about our products and services and to hear from our expert team members about the latest developments.
Heads up💡
Inspera Exam Portal and Smarter Proctoring 💻
- IEP 1.15.6 (ETA April) adds an Event to the Screen Recording every time a third-party application is used to improve post-session reviews.
- IEP 1.15.4 has enhanced security when using the Moderate Security Policy on Windows: IEP now terminates the first time focus is lost to a third-party application, instead of the second time as before.
To learn more about IEP’s Moderate Security Policy, visit the Help Center article Inspera Exam portal (IEP): Windows Security Screen Access for Moderate Security Tests.
Safe Exam Browser updates for Windows
SEB 3.4.1 for Windows is now enabled for all and is the default version that candidates download when downloading SEB on Windows from Inspera. We have enabled two displays in SEB 3.4.0 and 3.4.1 to accommodate candidates requiring external screens. Support for SEB 3.0.1 on Windows has ended.
Windows updates
| Removing support | Adding support | Default version |
|------------------|----------------|-----------------|
| 3.0.1            | 3.4.1          | 3.4.1           |
New in this release🚀
Inspera Exam Portal Recording options
To balance integrity and data privacy on Open Book Assessments, we have made some requested changes to IEP when Screen-Only recording is used for automated proctoring.
The following steps in the Inspera Exam Portal flow are disabled to not require candidate access to camera and microphone:
- Camera/Mic page
- Photo page
- ID page
To learn more about IEP’s recording options, visit the Help Center article Test setup - Inspera Smarter Proctoring.
New workspace names in Marking 2.0 for Norwegian and Swedish customers
In collaboration with Higher Ed institutions in Norway and Sweden, we have decided to rename workspaces in Marking 2.0. The goal is to make it easier for the Graders to understand which workspace they should use to perform a given task or find the information they are looking for.
| Existing workspace name (NO/SE) | New workspace name (NO) | New workspace name (SE-HE/SE-K12) |
|---------------------------------|-------------------------|------------------------------------|
| Oversikt/Bedömningsöversikt | Kandidatoversikt | Studentöversikt/Elevöversikt |
| Resultater/Resultat | Resultatoversikt | Resultatöversikt |
| Vurdering/Bedömning | Vurder oppgavesvar | Bedöm uppgiftssvar |
| Begrunnelser/Motivering av betyg | Begrunnelser | Motivering av resultat |
| Oppgaver/Uppgifter | Oppgavesett | Uppgiftsöversikt |
| Resultatoversikt/Resultatöversikt för student/elev | Kandidatrapport | Studentrapport/Elevrapport |
Content production (Author) ✍🏼
Polygon shape in Hotspot question type now in Open Beta
We’re excited to bring the new Polygon shape feature to the Hotspot question type. Previously, authors were restricted to circles and rectangles as hotspot areas; now they can draw a precise polygon. A polygon must have at least three points, with no upper limit on the number of points. Each point on the polygon can be adjusted once drawn, and if the background image is changed, the polygon can be adapted to fit the new image. This feature allows Authors to use the shape more creatively to outline objects in a range of subjects, including medicine and biology.
The power of this new capability allows you to ensure that only the correct parts of a background image are marked as correct. It requires candidates to be precise in where they click rather than a general area within a circle or square.
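As an illustration of why polygons enable this precision, here is a standard ray-casting point-in-polygon test in Python. This is a generic sketch of the technique, not Inspera's actual hit-detection code.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal ray
    cast rightwards from (x, y) crosses; an odd count means inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

triangle = [(0, 0), (10, 0), (5, 8)]
print(point_in_polygon(5, 3, triangle))  # True: click lands inside
print(point_in_polygon(0, 8, triangle))  # False: click lands outside
```

A circle or rectangle hotspot only needs a distance or bounds check; the polygon test above is what makes arbitrary outlines, such as an organ on an anatomy image, scorable with the same precision.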
This feature is released in Open Beta and requires activation. Please contact the Service Desk to activate. We plan to enable this for all with the September release and recommend getting familiar with the new possibilities.
Test setup (Deliver) 🎛️
Text to Speech now available in Icelandic
We have further expanded the languages that the Text to Speech feature supports to include Icelandic. To change the language, select the ‘Settings’ icon on the test start screen.
To learn more about Text to Speech, visit our Help Center article Text to speech.
Marking💯
Option to show assessment rubric to the candidate during a test
One benefit of Rubrics is that it sets out clear expectations for the candidate by defining the criteria for grading a particular submission. By seeing the rubric, candidates can gain a better understanding of the specific expectations and requirements. With this new addition to our Rubrics functionality, Authors and Planners can choose to let the candidate access the Rubric for a question during the test. The Rubric is available to the candidate as a resource.
To learn more about adding Rubrics as a resource, visit the Help Center article Rubrics overview.
Other Marking improvements:
- When a grader downloaded the “Marks as Excel file”, the file listed all candidates on the test, regardless of whether they belonged to the Committee the grader was assigned to. Now the download only includes the candidates relevant to the specific Grader.
---
February 9th Release Webinar
View the February 9th, 2023 Release webinar by visiting this link.
Heads up💡
Safe Exam Browser 3.4.0 for Windows enabled for all
SEB 3.4.0 for Windows is now the default version that candidates download when downloading SEB on Windows from Inspera. A blocker issue has been found in SEB 3.2.2 for Mac, and a fix is in progress, planned for release in April.
To learn more about the System Requirements, visit our Help Center article System Requirements.
Supported SEB versions plan:

| March release | April release | Post-April release (full list of supported versions) |
|---------------|---------------|-------------------------------------------------------|
| Windows       | Windows, Mac  | Windows, Mac                                          |
Inspera Exam Portal and Smarter Proctoring 💻
Since we released Inspera Exam Portal 1.14.14 back in September 2022, we have continued improving security and integrity for closed book assessments. We now have a major new release, Inspera Exam Portal 1.15.4, that offers many security improvements.
To learn more, please visit our Help Center article Versions of Inspera Exam Portal. You can test the new version in your production tenant together with your existing version. Please make a Service Request to our Service Desk team, and we will enable the new IEP release on a new URL.
What about ChatGPT?
We have just released a blog article outlining strategies to continue supporting open-book assessments with integrity. To learn more, visit the blog post on our website: Conducting open book assessments with integrity in a digital world.
New in this release 🚀
Marking💯
Qualitative rubrics in Open Beta
With this release, we are happy to announce that we are adding Qualitative rubrics to our existing rubrics functionality. A Qualitative rubric is used to provide feedback and evaluate subjective, non-numerical aspects of candidate performance. It provides clear, specific criteria for evaluating the quality of the work and helps to ensure consistent and fair grading.
For an Art History assessment, it can look like this:
As you can see, there is no point scale used to assess candidate performance with this rubric. When marking with a Qualitative rubric, the Grader selects a level of performance for each criterion and provides feedback specifically for that level.
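To illustrate the idea, here is a minimal Python sketch of what a qualitative rubric could hold: named criteria, each with ordered levels of performance and level-specific feedback, and no point scale. The criteria and wording are invented examples, not Inspera's data model.

```python
# Hypothetical qualitative rubric for an Art History assessment:
# each criterion maps levels of performance to level-specific feedback.
rubric = {
    "Use of sources": {
        "Excellent": "Integrates primary sources to support the argument.",
        "Developing": "Cites sources but rarely connects them to the claim.",
        "Beginning": "Few or no sources referenced.",
    },
    "Visual analysis": {
        "Excellent": "Detailed formal analysis linked to historical context.",
        "Developing": "Describes the work with limited interpretation.",
        "Beginning": "Little engagement with the work itself.",
    },
}

def grade(selections):
    """Grader picks one level per criterion; the result is a level and
    its feedback for each criterion, with no numeric score involved."""
    return {c: (level, rubric[c][level]) for c, level in selections.items()}

result = grade({"Use of sources": "Excellent", "Visual analysis": "Developing"})
print(result["Visual analysis"][0])  # Developing
```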
Qualitative rubrics provide several benefits:
- Clarity: Provide clear, specific, and concrete criteria for what candidates are expected to know and be able to do.
- Fairness: Eliminate subjectivity and bias in grading by establishing clear criteria and standards for assessment.
- Feedback: Focus feedback on specific aspects of a candidate’s performance, rather than providing a general grade or score.
- Learning: Promote a growth mindset by encouraging candidates to see assessment as a learning opportunity rather than as a source of stress or anxiety.
- Communication: Facilitate clear communication between teachers and candidates about what is expected of candidates and what they need to do to improve.
To learn more about the different types of Rubrics Inspera Assessment now supports, visit our Help Center article Rubrics overview.
To activate Qualitative rubrics, please contact the service desk.
Other Marking improvements:
- We fixed an issue where the page would fail to display the Candidate report when the Grader navigated between candidates using the arrows in the footer.
- When authoring a question with rubrics, we have now included a link to the rubrics documentation so that authors that are unfamiliar with the rubrics functionality can access the documentation directly from the Author module.
- To align with the options available in question authoring and classic grading we now support negative marks for automatically calculated questions in Marking 2.0.
- It is now possible to update the threshold values in Marking 2.0 for tests that do not have any submissions.
- We fixed an issue where Graders lost their private notes when navigating quickly between candidates.
- If a candidate submitted both a PDF and Inspera Scan Sketches on the same question, only the PDF would appear for the grader in Marking 2.0. We now have a warning message alerting the Graders that there are Inspera Scan Sketches linked to the response.
Content production (Author) ✍🏼
Numerical Simulation in Open Beta
Numerical Simulation brings Authentic STEM assessment into Inspera Assessment. The new question type is based on Numeric Entry, where the question is answered by typing a numerical value that is automatically marked.
Numerical Simulation supports auto-marked math questions based on a program model to declare variables (inputs), formulas, teacher answers (outputs), and response outcomes with scoring. The program model enables randomization and more authentic assessment through real-world programmed scenarios. For STEM and Business subjects, the use cases are endless.
For those of you familiar with STACK, it will be easy to get started with Numerical Simulation as it uses the Maxima language which is also used by STACK.
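As a rough illustration of the program-model idea (Inspera's actual models are written in the Maxima language, not Python), here is a sketch that declares randomized input variables, computes the teacher answer from a formula, and auto-marks a response within a tolerance. All names and values are hypothetical.

```python
import random

def make_question(seed):
    """Each candidate (seed) gets a different variant of the question."""
    rng = random.Random(seed)
    principal = rng.randrange(1000, 5000, 100)        # randomized input
    rate = rng.choice([0.02, 0.03, 0.05])             # randomized input
    answer = round(principal * (1 + rate) ** 3, 2)    # teacher answer (output)
    text = (f"A deposit of {principal} grows at {rate:.0%} per year. "
            f"What is its value after 3 years?")
    return text, answer

def mark(response, answer, tolerance=0.01):
    """Response outcome: full marks within the tolerance, otherwise zero."""
    return 1 if abs(response - answer) <= tolerance else 0

text, answer = make_question(seed=42)
print(text)
print(mark(answer, answer))  # correct response scores 1
```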
See an example from the Authoring tool:
To learn more about Numerical Simulation, visit our Help Center article Question Type - Numerical Simulation.
To activate Numerical Simulation, please contact the Service Desk.
Integrations 🔌
The event/webhook ‘resource_export_complete’ is now in its final/release state, and the triggering user now appears in the standard location for events. Note that this event is exempt from the general rule that triggering users are not notified of their own events. The event is now only triggered when the resource-download API is used.
Test setup (Deliver) 🎛️
Improvements:
Currently, the API endpoint /v1/users/student allows creating duplicate usernames without returning an error. From the May 5 release onwards, creating duplicate usernames will no longer be allowed, and an error will be returned in that scenario.
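For integrations that batch-create users via this endpoint, it may be worth deduplicating usernames client-side before the stricter behaviour lands. The sketch below is illustrative only; the case-insensitive comparison is an assumption for the example, not documented Inspera behaviour.

```python
def dedupe_usernames(users):
    """Drop later entries whose username repeats, preserving order.
    Usernames are compared case-insensitively (an assumption here)."""
    seen, unique, duplicates = set(), [], []
    for user in users:
        name = user["username"].lower()
        if name in seen:
            duplicates.append(user)   # would now trigger the API error
        else:
            seen.add(name)
            unique.append(user)
    return unique, duplicates

users = [{"username": "ada"}, {"username": "grace"}, {"username": "Ada"}]
unique, dupes = dedupe_usernames(users)
print([u["username"] for u in unique])  # ['ada', 'grace']
print(len(dupes))                       # 1
```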
---
Released: January 6th, 2023
Heads up💡
Scheduled downtime January 6th, 2023
During the January 2023 maintenance window (Friday, Jan 6, 21:00-23:00 UTC), Inspera Assessment will undergo a period of planned complete unavailability to carry out maintenance tasks. While candidates logged into tests will be able to continue working offline, all online functionality will be unavailable for some time, and users may be required to log back in when the service is back up. We apologise for the inconvenience. Please avoid scheduling your assessments during this time window.
For additional information about upcoming maintenance, visit our Status page. Learn more
Next release webinar
We have moved our release webinar to February 9th, 9am CET to include our February release in the webinar. Please join by using this link.
New in this release 🚀
Marking 💯
Rubrics are moving to general release
We are moving Rubrics from open beta and are thrilled to offer question-level Rubrics to everyone. With our new rubrics editor, Authors can quickly and easily create different assessment rubrics (point-based, point-ranged, and percentage-ranged). Graders will be able to use the rubric to quickly mark a question and provide feedback to the candidate. The candidate will have access to the rubric and the feedback in the Candidate report.
If you are unfamiliar with assessment rubrics, here are some benefits they provide:
- Articulates expectations to the candidate and lays out a scoring mechanism
- Reduces time spent evaluating
- Minimizes inconsistent marking
- Speeds up feedback to candidates
- Eases communication about assessment performance
You can learn more about Rubrics in our Help Center article, Rubrics overview.
To activate rubrics please contact the service desk.
Minor improvements in Marking
- We added missing Icelandic translations to our marking tools (Classic and 2.0).
- Previously, if you wanted to update the max score for criteria in a rubric, you had to do that in the criteria column. We saw that users tried to use the level of performance to achieve this, so we now support changing the max score by updating the score for the level of performance.
- We had some inconsistencies in updating feedback and marks when grading using a rubric. (Marks were changed to the previous mark and feedback was disappearing when editing). This is now fixed.
- We had an issue with displaying the rubric to the candidate in the candidate report. This is now fixed.
Test setup and Monitoring 🎛️
Export (CSV) from Monitor to include all the warnings on a candidate
Previously when making an export from Monitor (UI), only the most recent “Warning” was getting populated into the CSV file, preventing the customers from completing due diligence. This has been improved to now include all the existing “Warnings” for each candidate in the CSV export.
To ensure the functionality is error-free as more customers use it, it currently requires activation by request to the Service Desk. We will then automatically activate it for everyone in a later release.
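For customers processing the export programmatically, the sketch below shows one way to collect multiple warnings per candidate from a CSV. The column names ("Candidate ID", "Warnings") and the semicolon separator are assumptions for illustration, not Inspera's documented export format.

```python
import csv
import io

# Stand-in for a downloaded Monitor export with the assumed columns.
sample = io.StringIO(
    "Candidate ID,Warnings\n"
    "1001,Lost connection; Low battery\n"
    "1002,\n"
)

warnings_per_candidate = {}
for row in csv.DictReader(sample):
    raw = row["Warnings"]
    # Split the warnings cell into a list; an empty cell means no warnings.
    warnings_per_candidate[row["Candidate ID"]] = (
        [w.strip() for w in raw.split(";")] if raw else []
    )

print(warnings_per_candidate["1001"])  # ['Lost connection', 'Low battery']
print(warnings_per_candidate["1002"])  # []
```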