{"id":1029624,"date":"2024-05-02T09:50:43","date_gmt":"2024-05-02T16:50:43","guid":{"rendered":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/?p=1029624"},"modified":"2024-05-03T07:30:51","modified_gmt":"2024-05-03T14:30:51","slug":"research-focus-week-of-april-29-2024","status":"publish","type":"post","link":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/blog\/research-focus-week-of-april-29-2024\/","title":{"rendered":"Research Focus: Week of April 29, 2024"},"content":{"rendered":"\n<figure class=\"wp-block-pullquote\"><blockquote><p><em class=\"\">Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code\/datasets, new hires and other milestones from across the research community at Microsoft.<\/em><\/p><\/blockquote><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1400\" height=\"788\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1.png\" alt=\"Research Focus: Week of April 29, 2024\" class=\"wp-image-1029753\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1.png 1400w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-300x169.png 300w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-1024x576.png 1024w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-768x432.png 768w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-1066x600.png 1066w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-655x368.png 655w, 
https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-240x135.png 240w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-640x360.png 640w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-960x540.png 960w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-1280x720.png 1280w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"can-large-language-models-transform-natural-language-intent-into-formal-method-postconditions\">Can Large Language Models Transform Natural Language Intent into Formal Method Postconditions?<\/h2>\n\n\n\n<p>Informal natural language that describes code functionality, such as code comments or function documentation, may contain substantial information about a program\u2019s intent. However, there is no guarantee that a program\u2019s implementation aligns with its natural language documentation. In the case of a conflict, leveraging information in code-adjacent natural language has the potential to enhance fault localization, debugging, and code trustworthiness. Yet this information is often underutilized, due to the inherent ambiguity of natural language, which makes such intent challenging to check programmatically. The \u201cemergent abilities\u201d of large language models (LLMs) have the potential to facilitate the translation of natural language intent to programmatically checkable assertions. 
However, due to a lack of benchmarks and evaluation metrics, it is unclear if LLMs can correctly translate informal natural language specifications into formal specifications that match programmer intent\u2014and whether such translation could be useful in practice.<\/p>\n\n\n\n<p>In a new paper: <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/formalizing-natural-language-intent-into-program-specifications-via-large-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">Can Large Language Models Transform Natural Language Intent into Formal Method Postconditions?<\/a>, researchers from Microsoft describe nl2postcond, the problem of leveraging LLMs to transform informal natural language into formal method postconditions, expressed as program assertions. The paper, to be presented at the upcoming <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/2024.esec-fse.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">ACM International Conference on the Foundations of Software Engineering<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, introduces and validates metrics to measure and compare different nl2postcond approaches, using the correctness and discriminative power of generated postconditions. 
The researchers show that nl2postcond via LLMs has the potential to be helpful in practice by demonstrating that LLM-generated specifications can be used to discover historical bugs in real-world projects.\u00a0<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--1\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/formalizing-natural-language-intent-into-program-specifications-via-large-language-models\/\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"semantically-aligned-question-and-code-generation-for-automated-insight-generation\">Semantically Aligned Question and Code Generation for Automated Insight Generation<\/h2>\n\n\n\n<p>People who work with data, like engineers, analysts, and data scientists, often must manually look through data to find valuable insights or write complex scripts to automate exploration of the data. Automated insight generation provides these workers the opportunity to immediately glean insights about their data and identify valuable starting places for writing their exploration scripts. Unfortunately, automated insight generation with LLMs can sometimes produce code that does not correctly correspond (or align) to the insight. 
In a recent paper: <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/semantically-aligned-question-and-code-generation\/\" target=\"_blank\" rel=\"noreferrer noopener\">Semantically Aligned Question and Code Generation for Automated Insight Generation<\/a>, researchers from Microsoft leverage the semantic knowledge of LLMs to generate targeted and insightful questions about data and the corresponding code to answer those questions. Through an empirical study on data from <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/2305.07288\" target=\"_blank\" rel=\"noopener noreferrer\">Open-WikiTable<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, they then show that embeddings can be effectively used for filtering out semantically unaligned pairs of question and code. The research also shows that generating questions and code together yields more interesting and diverse insights about data.\u00a0<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--2\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/semantically-aligned-question-and-code-generation\/\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-dots\"\/>\n\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"explaining-clip-s-performance-disparities-on-data-from-blind-low-vision-users\">Explaining CLIP&#8217;s performance disparities on data from blind\/low vision 
users<\/h2>\n\n\n\n<p>AI-based applications hold the potential to assist people who are blind or low vision (BLV) with everyday visual tasks. However, human assistance is often required, due to the wide variety of assistance needed and varying quality of&nbsp;images available. Recent advances in large multi-modal models (LMMs) could potentially address these challenges, enabling a new era of automated visual assistance. Yet, little work has been done to evaluate how well LMMs perform on data from BLV users.<\/p>\n\n\n\n<p>In a recent paper: <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/explaining-clips-performance-disparities-on-data-from-blind-low-vision-users\/\" target=\"_blank\" rel=\"noreferrer noopener\">Explaining CLIP&#8217;s performance disparities on data from blind\/low vision users<\/a>, researchers from Microsoft and the World Bank address this issue by assessing <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/openai.com\/research\/clip\" target=\"_blank\" rel=\"noopener noreferrer\">CLIP<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a widely used LMM with the potential to underpin many assistive technologies. Testing 25 CLIP variants in a zero-shot classification task, their results show that disability objects, like guide canes and Braille displays, are recognized significantly less accurately than common objects, like TV remote controls and coffee mugs\u2014in some cases by up to 28 percentage points.\u00a0<\/p>\n\n\n\n<p>The researchers perform an analysis of the captions in three large-scale datasets that are commonly used to train models like CLIP and show that BLV-related content (such as guide canes) is rarely mentioned. This is a potential reason for the large performance gaps. 
The researchers show that a few-shot learning approach with as few as five example images of a disability object can improve the model\u2019s ability to recognize that object, which could help mitigate CLIP\u2019s performance disparities for BLV users. They then discuss other possible mitigations.\u00a0<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--3\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/explaining-clips-performance-disparities-on-data-from-blind-low-vision-users\/\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n\t<div class=\"border-bottom border-top border-gray-300 mt-5 mb-5 msr-promo text-center text-md-left alignwide\" data-bi-aN=\"promo\" data-bi-id=\"1144027\">\n\t\t\n\n\t\t<p class=\"msr-promo__label text-gray-800 text-center text-uppercase\">\n\t\t<span class=\"px-4 bg-white display-inline-block font-weight-semibold small\">PODCAST SERIES<\/span>\n\t<\/p>\n\t\n\t<div class=\"row pt-3 pb-4 align-items-center\">\n\t\t\t\t\t\t<div class=\"msr-promo__media col-12 col-md-5\">\n\t\t\t\t<a class=\"bg-gray-300 display-block\" href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/story\/ai-testing-and-evaluation-learnings-from-science-and-industry\/\" aria-label=\"AI Testing and Evaluation: Learnings from Science and Industry\" data-bi-cN=\"AI Testing and Evaluation: Learnings from Science and Industry\" target=\"_blank\">\n\t\t\t\t\t<img decoding=\"async\" class=\"w-100 display-block\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2025\/06\/EP2-AI-TE_Hero_Feature_River_No_Text_1400x788.jpg\" alt=\"Illustrated headshots of Daniel Carpenter, Timo Minssen, Chad Atalla, and Kathleen Sullivan for 
the Microsoft Research Podcast\" \/>\n\t\t\t\t<\/a>\n\t\t\t<\/div>\n\t\t\t\n\t\t\t<div class=\"msr-promo__content p-3 px-5 col-12 col-md\">\n\n\t\t\t\t\t\t\t\t\t<h2 class=\"h4\">AI Testing and Evaluation: Learnings from Science and Industry<\/h2>\n\t\t\t\t\n\t\t\t\t\t\t\t\t<p id=\"ai-testing-and-evaluation-learnings-from-science-and-industry\" class=\"large\">Discover how Microsoft is learning from other domains to advance evaluation and testing as a pillar of AI governance.<\/p>\n\t\t\t\t\n\t\t\t\t\t\t\t\t<div class=\"wp-block-buttons justify-content-center justify-content-md-start\">\n\t\t\t\t\t<div class=\"wp-block-button\">\n\t\t\t\t\t\t<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/story\/ai-testing-and-evaluation-learnings-from-science-and-industry\/\" aria-describedby=\"ai-testing-and-evaluation-learnings-from-science-and-industry\" class=\"btn btn-brand glyph-append glyph-append-chevron-right\" data-bi-cN=\"AI Testing and Evaluation: Learnings from Science and Industry\" target=\"_blank\">\n\t\t\t\t\t\t\tListen now\t\t\t\t\t\t<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div><!--\/.msr-promo__content-->\n\t<\/div><!--\/.msr-promo__inner-wrap-->\n\t<\/div><!--\/.msr-promo-->\n\t\n\n\n<h3 class=\"wp-block-heading h6 has-blue-color has-text-color has-link-color wp-elements-a584a2137da4151ecbde93fba771f798\" id=\"new-research\">NEW RESEARCH<\/h3>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"closed-form-bounds-for-dp-sgd-against-record-level-inference\">Closed-Form Bounds for DP-SGD against Record-level Inference&nbsp;<\/h2>\n\n\n\n<p>Privacy of training data is a central consideration when deploying machine learning (ML) models. Models trained with guarantees of differential privacy (DP) provably resist a wide range of attacks. 
Although it is possible to derive bounds, or safe limits, for specific privacy threats solely from DP guarantees, meaningful bounds require impractically small privacy budgets, which results in a large loss in utility.<br>\u00a0<br>In a recent paper: <a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/closed-form-bounds-for-dp-sgd-against-record-level-inference\/\" target=\"_blank\" rel=\"noreferrer noopener\">Closed-Form Bounds for DP-SGD against Record-level Inference<\/a>, researchers from Microsoft present a new approach to quantify the privacy of ML models against <strong>membership inference<\/strong> (inferring whether a data record is in the training data) and <strong>attribute inference<\/strong> (reconstructing partial information about a record) without the indirection through DP. They focus on the popular DP-SGD algorithm, which they model as an information theoretic channel whose inputs are the secrets that an attacker wants to infer (e.g., membership of a data record) and whose outputs are the intermediate model parameters produced by iterative optimization. They obtain closed-form bounds for membership inference that match state-of-the-art techniques but are orders of magnitude faster to compute. They also present the first algorithm to produce data-dependent bounds against attribute inference. Compared to bounds computed indirectly through numerical DP budget accountants, these bounds provide a tighter characterization of the privacy risk of deploying an ML model trained on a specific dataset. 
This research provides a direct, interpretable, and practical way to evaluate the privacy of trained models against inference threats without sacrificing utility.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button is-style-outline is-style-outline--4\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/publication\/closed-form-bounds-for-dp-sgd-against-record-level-inference\/\">Read the paper<\/a><\/div>\n\n\n\n<div class=\"wp-block-button is-style-fill-github\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/github.com\/microsoft\/dpsgd-calculator\/\" target=\"_blank\" rel=\"noreferrer noopener\">Get the code<\/a><\/div>\n<\/div>\n\n\n\n<div style=\"padding-bottom:64px; padding-top:64px\" class=\"wp-block-msr-immersive-section alignfull row has-background has-lighter-gray-background-color has-text-color has-black-color wp-block-msr-immersive-section\">\n\t\n\t<div class=\"container\">\n\t\t<div class=\"wp-block-msr-immersive-section__inner\">\n\t\t\t\t\t<\/div>\n\t<\/div>\n\n\t<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In this edition: Can LLMs transform natural language into formal method postconditions; Semantically aligned question + code generation for automated insight generation; Explaining CLIP performance disparities on blind\/low vision data; plus recent 
news.<\/p>\n","protected":false},"author":42735,"featured_media":1029753,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[{"type":"user_nicename","value":"Sarah Fakhoury","user_id":"42180"},{"type":"user_nicename","value":"Saikat Chakraborty","user_id":"42411"},{"type":"user_nicename","value":"Shuvendu Lahiri","user_id":"33640"},{"type":"user_nicename","value":"Anirudh Khatry","user_id":"42795"},{"type":"user_nicename","value":"Sumit Gulwani","user_id":"33755"},{"type":"user_nicename","value":"Vu Le","user_id":"39174"},{"type":"user_nicename","value":"Chris Parnin","user_id":"41985"},{"type":"user_nicename","value":"Mukul Singh","user_id":"42048"},{"type":"user_nicename","value":"Gust Verbruggen","user_id":"41605"},{"type":"user_nicename","value":"Daniela Massiceti","user_id":"40408"},{"type":"user_nicename","value":"Camilla Longden","user_id":"36311"},{"type":"user_nicename","value":"Agnieszka Slowik","user_id":"42534"},{"type":"user_nicename","value":"Martin Grayson","user_id":"32893"},{"type":"user_nicename","value":"Cecily Morrison","user_id":"31356"},{"type":"user_nicename","value":"Giovanni Cherubin","user_id":"41410"},{"type":"user_nicename","value":"Andrew Paverd","user_id":"37902"},{"type":"user_nicename","value":"Boris K&ouml;pf","user_id":"37857"},{"type":"user_nicename","value":"Shruti Tople","user_id":"39003"},{"type":"user_nicename","value":"Lukas Wutschitz","user_id":"38775"},{"type":"user_nicename","value":"Santiago 
Zanella-B\u00e9guelin","user_id":"33518"}],"msr_hide_image_in_river":0,"footnotes":""},"categories":[1],"tags":[],"research-area":[13556,13562,13554,13560,13558],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243984],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-1029624","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-research-blog","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-human-computer-interaction","msr-research-area-programming-languages-software-engineering","msr-research-area-security-privacy-cryptography","msr-locale-en_us","msr-post-option-blog-homepage-featured"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[199561,199565],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[144812,559983,663303,793670,998211,1142579],"related-projects":[890049,830104,648207],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Sarah Fakhoury","user_id":42180,"display_name":"Sarah Fakhoury","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/sfakhoury\/\" aria-label=\"Visit the profile page for Sarah Fakhoury\">Sarah Fakhoury<\/a>","is_active":false,"last_first":"Fakhoury, Sarah","people_section":0,"alias":"sfakhoury"},{"type":"user_nicename","value":"Saikat Chakraborty","user_id":42411,"display_name":"Saikat Chakraborty","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/saikatc\/\" aria-label=\"Visit the profile page for Saikat Chakraborty\">Saikat Chakraborty<\/a>","is_active":false,"last_first":"Chakraborty, Saikat","people_section":0,"alias":"saikatc"},{"type":"user_nicename","value":"Shuvendu Lahiri","user_id":33640,"display_name":"Shuvendu 
Lahiri","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/shuvendu\/\" aria-label=\"Visit the profile page for Shuvendu Lahiri\">Shuvendu Lahiri<\/a>","is_active":false,"last_first":"Lahiri, Shuvendu","people_section":0,"alias":"shuvendu"},{"type":"user_nicename","value":"Sumit Gulwani","user_id":33755,"display_name":"Sumit Gulwani","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/sumitg\/\" aria-label=\"Visit the profile page for Sumit Gulwani\">Sumit Gulwani<\/a>","is_active":false,"last_first":"Gulwani, Sumit","people_section":0,"alias":"sumitg"},{"type":"user_nicename","value":"Vu Le","user_id":39174,"display_name":"Vu Le","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/levu\/\" aria-label=\"Visit the profile page for Vu Le\">Vu Le<\/a>","is_active":false,"last_first":"Le, Vu","people_section":0,"alias":"levu"},{"type":"user_nicename","value":"Chris Parnin","user_id":41985,"display_name":"Chris Parnin","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/chrisparnin\/\" aria-label=\"Visit the profile page for Chris Parnin\">Chris Parnin<\/a>","is_active":false,"last_first":"Parnin, Chris","people_section":0,"alias":"chrisparnin"},{"type":"user_nicename","value":"Mukul Singh","user_id":42048,"display_name":"Mukul Singh","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/singhmukul\/\" aria-label=\"Visit the profile page for Mukul Singh\">Mukul Singh<\/a>","is_active":false,"last_first":"Singh, Mukul","people_section":0,"alias":"singhmukul"},{"type":"user_nicename","value":"Gust Verbruggen","user_id":41605,"display_name":"Gust Verbruggen","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/gverbruggen\/\" aria-label=\"Visit the profile page for Gust Verbruggen\">Gust Verbruggen<\/a>","is_active":false,"last_first":"Verbruggen, 
Gust","people_section":0,"alias":"gverbruggen"},{"type":"user_nicename","value":"Camilla Longden","user_id":36311,"display_name":"Camilla Longden","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/calongde\/\" aria-label=\"Visit the profile page for Camilla Longden\">Camilla Longden<\/a>","is_active":false,"last_first":"Longden, Camilla","people_section":0,"alias":"calongde"},{"type":"user_nicename","value":"Martin Grayson","user_id":32893,"display_name":"Martin Grayson","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/mgrayson\/\" aria-label=\"Visit the profile page for Martin Grayson\">Martin Grayson<\/a>","is_active":false,"last_first":"Grayson, Martin","people_section":0,"alias":"mgrayson"},{"type":"user_nicename","value":"Cecily Morrison","user_id":31356,"display_name":"Cecily Morrison","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/cecilym\/\" aria-label=\"Visit the profile page for Cecily Morrison\">Cecily Morrison<\/a>","is_active":false,"last_first":"Morrison, Cecily","people_section":0,"alias":"cecilym"},{"type":"user_nicename","value":"Giovanni Cherubin","user_id":41410,"display_name":"Giovanni Cherubin","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/gcherubin\/\" aria-label=\"Visit the profile page for Giovanni Cherubin\">Giovanni Cherubin<\/a>","is_active":false,"last_first":"Cherubin, Giovanni","people_section":0,"alias":"gcherubin"},{"type":"user_nicename","value":"Andrew Paverd","user_id":37902,"display_name":"Andrew Paverd","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/anpaverd\/\" aria-label=\"Visit the profile page for Andrew Paverd\">Andrew Paverd<\/a>","is_active":false,"last_first":"Paverd, Andrew","people_section":0,"alias":"anpaverd"},{"type":"user_nicename","value":"Boris K&ouml;pf","user_id":37857,"display_name":"Boris 
K&ouml;pf","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/bokoepf\/\" aria-label=\"Visit the profile page for Boris K&ouml;pf\">Boris K&ouml;pf<\/a>","is_active":false,"last_first":"K\u00f6pf, Boris","people_section":0,"alias":"bokoepf"},{"type":"user_nicename","value":"Shruti Tople","user_id":39003,"display_name":"Shruti Tople","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/shtople\/\" aria-label=\"Visit the profile page for Shruti Tople\">Shruti Tople<\/a>","is_active":false,"last_first":"Tople, Shruti","people_section":0,"alias":"shtople"},{"type":"user_nicename","value":"Lukas Wutschitz","user_id":38775,"display_name":"Lukas Wutschitz","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/luwutsch\/\" aria-label=\"Visit the profile page for Lukas Wutschitz\">Lukas Wutschitz<\/a>","is_active":false,"last_first":"Wutschitz, Lukas","people_section":0,"alias":"luwutsch"},{"type":"user_nicename","value":"Santiago Zanella-B\u00e9guelin","user_id":33518,"display_name":"Santiago Zanella-B\u00e9guelin","author_link":"<a href=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/people\/santiago\/\" aria-label=\"Visit the profile page for Santiago Zanella-B\u00e9guelin\">Santiago Zanella-B\u00e9guelin<\/a>","is_active":false,"last_first":"Zanella-B\u00e9guelin, Santiago","people_section":0,"alias":"santiago"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-960x540.png\" class=\"img-object-cover\" alt=\"Research Focus: Week of April 29, 2024\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-960x540.png 960w, 
https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-300x169.png 300w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-1024x576.png 1024w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-768x432.png 768w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-1066x600.png 1066w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-655x368.png 655w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-240x135.png 240w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-640x360.png 640w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1-1280x720.png 1280w, https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-content\/uploads\/2024\/04\/RF40-BlogHeroFeature-1400x788-1.png 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"","formattedDate":"May 2, 2024","formattedExcerpt":"In this edition: Can LLMs transform natural language into formal method postconditions; Semantically aligned question + code generation for automated insight generation; Explaining CLIP performance disparities on blind\/low vision data; plus recent 
news.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/1029624","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/users\/42735"}],"replies":[{"embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1029624"}],"version-history":[{"count":17,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/1029624\/revisions"}],"predecessor-version":[{"id":1030812,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/posts\/1029624\/revisions\/1030812"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media\/1029753"}],"wp:attachment":[{"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1029624"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1029624"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1029624"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1029624"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1029624"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/w
p-json\/wp\/v2\/msr-event-type?post=1029624"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1029624"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1029624"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1029624"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1029624"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/new-cm-edgedigital.pages.dev\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1029624"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}