Product updates (release notes) 2024

Thank you for using Inspera! Below you will find all general product updates made available in 2024. Please note that details on planned releases are added approximately 10 days ahead of the release, and there might be changes running up until the release date.

Stay informed and get more out of Inspera:

  • For more insight into what the Inspera team is working on, please visit our public roadmap.
  • Stay up to date with our latest product releases and updates by subscribing to our monthly Release Notes. You can register here.
  • Learn more about our platform and features by registering for upcoming webinars or viewing past webinars on our website here.

   

February release - Released February 6th, 2024

Heads Up💡

Inspera API Policy Update

This month we have published a new API policy. This policy sets out the terms that apply when a customer uses, makes contact with, or interacts with the external facing Inspera APIs. The policy can be found under the API section in the Inspera Help Center.

A heads up: As mentioned in the API documentation, if an API is called at a request rate that exceeds the maximum allowed rate (as defined in the API policy), an error message will be returned indicating that the request rate has been exceeded. Based on monitoring, we see that most usage is already within these limits, and we are in direct dialogue with customers where this is not the case. Starting in Q2 this year, we will gradually roll out active enforcement of these limits across all API users.
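For integrations that may approach these limits, a client-side retry with exponential backoff is a common pattern. The sketch below is illustrative only: the exact status code (HTTP 429 is assumed here) and the concrete limits are defined in the API policy.

```python
import time

def backoff_delays(max_retries=5, base=1.0, cap=30.0):
    """Exponential backoff schedule in seconds: 1, 2, 4, ... capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(max_retries)]

def call_with_backoff(request_fn, max_retries=5, sleep_fn=time.sleep):
    """Call `request_fn` (which returns an HTTP status code) and retry with
    increasing delays while the rate limit is exceeded. HTTP 429 is assumed
    here as the rate-limit status code; check the API policy for specifics."""
    for delay in backoff_delays(max_retries):
        status = request_fn()
        if status != 429:      # not rate limited: return immediately
            return status
        sleep_fn(delay)        # wait before the next attempt
    return request_fn()        # one final attempt after all delays
```

A client that backs off this way will stay within the allowed rate without manual throttling.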

New in this release 🚀

Welsh Language Support for Candidate Interface

We now support the Welsh language for the candidate interface. This can be activated for your tenant by reaching out to our service desk. 

Welsh for the Admin interface is also on its way but will be ready later this spring. Stay tuned for updates!

Author ✏️

Numerical Simulation: Now in General Availability!

Numerical Simulation has moved from Open Beta to General Availability. This question type still requires activation via the Service Desk. 

Improvements and fixes made to Numerical simulation with this release:

  • When enabling the question type, the functionality for adding rich feedback to candidates is enabled automatically; no separate activation is needed.
  • Numerical Simulation now works in combination with ‘Section pulling / selection’.
  • Fixed an issue where incorrect percentage values were displayed in the correct answer when Relative Tolerance was used, and improved the readability of the correct answer.
  • Fixed an issue where settings set under Options were not saved when saving the question.

Deliver 🤖

Open API for Test Activation

Customers can now enable immediate visibility of on-demand assessments for candidates without manual intervention. 

Our new API (see the Open API documentation for details) can be accessed through a simple POST request: POST /v1/test/{testid}/activate. In case of configuration errors, the API responds with a clear message and error code 400. In successful scenarios, expect a 200 response confirming the operation's success.
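As a sketch, such a call could look like the following using only the Python standard library. The base URL and the bearer-token authentication are assumptions for illustration; consult the Open API documentation for the actual host and authentication scheme.

```python
import urllib.error
import urllib.request

BASE_URL = "https://your-tenant.example.com/api"  # hypothetical base URL

def build_activate_request(test_id, token):
    """Build the POST request for /v1/test/{testid}/activate."""
    url = f"{BASE_URL}/v1/test/{test_id}/activate"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {token}"},  # assumed auth scheme
    )

def activate_test(test_id, token):
    """Send the activation request; 200 confirms success, while 400 signals
    a configuration error with a descriptive message in the response body."""
    req = build_activate_request(test_id, token)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
```

Separating request construction from sending makes the call easy to inspect or test without touching the network.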

Unlimited Attempts on Multiple Attempts

Planners can now enable unlimited attempts on a test utilizing Multiple Attempts. Candidates can take the test as often as needed - providing a more flexible and personalized learning experience.

Improvements and fixes made to Assessment Path with this release:

  • Previously, the steps between the Assessment Path and the individual tests were unclear. Now, clicking on a Parent ID directs users to the "Setup" tab in the Assessment Path UI, where they have visibility of individual tests under the Assessment Path. 
  • Previously, disabling grading on the Assessment Path could be confusing due to the presence of test weights. Now, when grading is disabled, test weights are no longer shown. Users will only see the option to set test weights when "Calculation of points" is set as "Test Weights" in the Assessment Path's UI.

Grade 💯

  • Improved user experience in Multiple Attempts and Assessment Paths. We have removed the "Download marks as CSV" option on the top-level test in the Marking tool. As there are no marks associated with the top-level test, this option is not required.
  • The default grading option can now be configured per tenant, so if you prefer “Do not use grade scale” as the default option when creating a test, please reach out to our Service Desk. More information on grading scales can be found in the Help Center article, Grading scales.
  • Removed the “Grade” section in the Grader’s view of the Candidate Report when the “Publish the final grade” feature is not enabled on the tenancy.

Accessibility enhancements🔍🖱️🔊

  • It is now possible to select “View in Submission” in the Candidate Report preview by pressing the “Tab” key.
  • The “Print” option in the Candidate Report is now clickable without needing to hover over it with the mouse.
  • Screen readers can now read the “View with Rubric” option in the Candidate Report.
  • The three links in the upper right of the Planner Dashboard header menu (Notifications, Settings, and Help), now have an accessible name for screenreader users.

January release - Released January 9th, 2024


New in this release  🚀

Author ✏️

Authoring improvements

  • Numerical Simulations question type (Beta): 
    • We have enhanced the user experience by ensuring that authors are now informed of issues, such as program model complexity exceeding the capabilities of the candidate test player, at the time of compiling the program model.
    • As candidates can only input their answers using numbers, all authored correct answers must be in number format (as opposed to e.g. fraction format). To facilitate authoring, correct answers are now automatically converted to numbers where possible. For instance, if a variable yields the value ⅛, it is converted to 0.125.

Note: if the correct answer variable can take a different number of decimals each time a number is generated, we recommend allowing for Absolute or Relative Tolerance to ensure the candidates are not penalized for rounding.
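The effect of tolerance on answer checking can be illustrated with a small sketch (an illustrative model, not Inspera's actual scoring code):

```python
def within_tolerance(answer, correct, abs_tol=0.0, rel_tol=0.0):
    """Accept an answer if it falls within the absolute tolerance or within
    the relative tolerance (expressed as a fraction of the correct value)."""
    return abs(answer - correct) <= max(abs_tol, rel_tol * abs(correct))
```

For example, a candidate who rounds the correct answer 0.125 to 0.13 is accepted with `abs_tol=0.01` but rejected when no tolerance is set, which is why tolerance is recommended for answers with varying decimal counts.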

Deliver 🤖

Multiple Attempts is now Generally Available!

We have worked closely with our customers and early adopters to make several improvements over time, and are launching Multiple Attempts as generally available! We will continue making improvements based on market feedback. 

By enabling Multiple Attempts on assessments, formative testing becomes a valuable tool for enhancing test-driven learning and empowering candidates to improve their understanding of key topics.  Learn more in the Help Center article, Multiple attempts.

Multiple Attempts drives test-driven learning with automatically-marked questions:

  • The planner can schedule assessments allowing for formative testing with multiple attempts so candidates can see how their score improves over time. 
  • The planner can set the maximum number of attempts and candidates can retake the assessment up to the set limit. 
  • The grading logic for final results can be configured to highest, average or last score. 
  • Instant feedback and an updated final score are available to candidates on each submission, with an opportunity to retry/reattempt up to the set limit. 
  • The automated marks show in the Marking 2.0 tool as for other automatically-marked questions with the ability to also grade the overall test.
  • The planner can download all attempts per candidate from the Marking 2.0 tool. 
  • It supports Numerical simulations which allow unique randomized Numerical Questions on every attempt.
  • It supports the randomized question order, preventing reliance on memorization and promoting independent engagement with the content.
  • It supports timed attempts, which enforces a duration limit on every attempt. 
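The final-result grading logic described above can be sketched as a simple reduction over a candidate's attempt scores (an illustrative model of the three configurations named above, not Inspera's implementation):

```python
def final_score(attempt_scores, strategy="highest"):
    """Combine per-attempt scores into a final result using the configured
    grading logic: "highest", "average", or "last"."""
    if not attempt_scores:
        raise ValueError("candidate has no attempts")
    if strategy == "highest":
        return max(attempt_scores)
    if strategy == "average":
        return sum(attempt_scores) / len(attempt_scores)
    if strategy == "last":
        return attempt_scores[-1]
    raise ValueError(f"unknown grading strategy: {strategy}")
```

With attempts scoring 50, 70, and 60, the final result is 70 ("highest"), 60.0 ("average"), or 60 ("last"), depending on the planner's configuration.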

The New Candidate Dashboard is now Generally Available! 

The new dashboard is a unified home for not only summative exams but also formative assessments. It has been designed with the following aspirations:

  • Accelerated Learning: It drives the focus back to learning with a bespoke user experience for our formative capabilities, Multiple Attempts and Assessment Path.
  • Inclusivity: It is designed with accessibility in mind to allow inclusive adoption in compliance with WCAG AA/AAA industry standards.
  • Usability: The redesign prioritizes an intuitive candidate experience, informed by customer feedback and usability studies. It introduces a refreshed global menu, an onboarding tour, and a default test ordering system emphasizing tests needing immediate attention at the top.

A comparison of the existing and new dashboards:

  Dashboard | Regular Assessment | Assessment Path, Multiple Attempts | Usability Improvements
  Existing  | 👍 | ❌ | ❌
  New       | 👍 | 👍 | 👍

Learn more in the Help Center article, Candidate dashboard.

Other Deliver improvements

  • Setting up of Multiple Attempts: We've enhanced the usability of Multiple Attempts by streamlining the user interface. Now, when Multiple Attempts are enabled, any incompatible settings are automatically made unavailable, reducing the risk of human error. This improvement ensures that your focus remains on configuring the essential settings needed for Multiple Attempts to function seamlessly. The list of settings that are available with Multiple Attempts has been specified here.
  • The maximum attempts can now be directly modified using keyboard input, enhancing user convenience beyond the previous limitation of utilizing arrow keys. 
  • Previously, any changes in test order (“reorder test”) in an Assessment Path were not promptly reflected on the setup page. Users now enjoy a seamless and accurate display without manual refreshes.

Grade 💯

  • Originality Report is now available. More information can be found under the new Originality category.

Other Grading improvements

  • The implies math symbol (⟹) is now correctly visualized in Marking.

Inspera Integrity Browser (previously Inspera Exam Portal) 💻

Version 1.16.0 released and renamed Inspera Integrity Browser

As mentioned in the last two release notes, Inspera has made two product name changes:

  1. Inspera Exam Portal (IEP) is now named Inspera Integrity Browser (IIB)
  2. Inspera Smarter Proctoring (ISP) is now named Inspera Proctoring (IP)

We are delighted to announce that, as of the January 2024 release, the products mentioned above have now been rebranded. You may notice these updates are reflected in both our product and support information going forward.

Version 1.16.0 (to be released during w/c 15th January) is the first IIB version to include the new name. There will be no name changes for the versions of IEP that we also currently support (1.14 and 1.15 series). 

There are some instances where the Inspera Integrity Browser (IIB) name is referred to in Inspera Assessment, and is therefore visible to all users from January 2024, irrespective of whether IIB version 1.16.0 is used. In these cases we ensure that the new name is followed by a caption reading ‘(previously Inspera Exam Portal)’, to provide further clarification to our users. For example, Planners will see the following caption when setting up a Test in the ‘Deliver’ section.

Version 1.16.0 also includes a new Inspera app logo:

Note: For the moment we have not made any changes to the ‘/get-IEP’ URL used to download the application. This means that ‘/get-IEP’ will still be used to download IIB version 1.16.0. We expect to provide an equivalent ‘/get-IIB’ URL in the February or March release.

For further information see our explainer guide on the Help Center. To try out IIB 1.16.0 when it is released, contact the Service Desk.

 

Inspera Originality 🕵️

We are excited to announce that Inspera Originality can now be part of the assessment process! Institutions can integrate originality checking into their existing workflow and maximize educational outcomes. Please be aware that Inspera Originality is an additional feature and is subject to its own licensing fees. For more information please contact your account manager. 

Originality Report is now available! 

Institutions that opt-in for originality checking on their assessments can access the Originality Report. The Originality Report contains information related to the originality detected within the submitted document. 

The Originality Report includes an Originality Index, derived from the originality checking process, which categorizes the document's originality level. The Originality Index comprises the following:

  • Similarity analyses: Explore traditional similarity checking, which assesses submitted documents against sources in both original and translated languages. Utilizing a percentage system, this method quantifies flagged similarity issues, providing a comprehensive evaluation of the document's alignment with existing sources.
  • AI Authorship: AI authorship detection is implemented within the originality checking process to determine whether there are possible instances of AI-written text within the document. It contains sentence-level insights, such as how many sentences are possibly AI-written.

 

Originality index

As a part of the Originality Report, users will be able to access the following features: 

  • See submission details: View details on all submissions as part of the Originality Report. 
  • Download: Download the entirety of the Originality Report for offline access in one of the following options: Original language report, Translated language report, and as a document file. 
  • Originality Levels: The originality index contains three originality levels: High risk, Medium risk, and Low risk. These levels provide information on the levels of detected originality within the text. 
  • Report Navigator: To ensure that users can easily navigate through multi-paged documents they can use the report navigator. 
  • Sentence information: Users can access detailed sentence information when clicking on them. Moreover, they can: change sources, remove and re-include sources, and exclude sentences entirely. 
  • Sentence highlights: After the originality checking is finished, users can access the highlighted sentences and gain more information about the found similarities. Sentences are highlighted in blue, green, red, and gray, and accompanied by names: exact match, possibly altered text, marked as quotation, and removed highlights. This classification is used to communicate the types of similarities that have been identified.  

Note: Additionally, we are actively exploring the option of utilizing the institution's default language in cases where it is not specified in the test settings. The establishment of this default language will take place during the onboarding process.

Accessibility enhancements 🔍🖱️🔊

Navigation Menu updates: 

We've made a few upgrades to the current navigation menu to enhance accessibility for users who rely on keyboards and screen readers, aiming to improve their overall experience.

The navigation menu now renders correctly at 200% zoom.

 

Previously, the main navigation did not follow the Disclosure Navigation pattern: expandable/collapsible elements were not indicated as such and could not be collapsed, and when tabbing, elements expanded automatically, forcing keyboard users to traverse all options rather than only the top-level ones. This has now been addressed.

Enhanced Page Title for Increased Precision 

The refined page title offers heightened accuracy and clarity, ensuring a more precise representation of the content and purpose of the page for users and search engines alike.

Pagination menu 

  • Pagination controls marked as interactive.
  • Clear labels added to First/Last Page controls.
  • Improved indication for the selected item.

Contributors menu 

In this update, the contributors' menu within the expandable/collapsible element has been appropriately marked up.

 
