All Stories

  1. Crowdsourcing or AI-Sourcing?
  2. Optimizing LLMs with Direct Preferences: A Data Efficiency Perspective
  3. Hate Speech Detection with Generalizable Target-aware Fairness
  4. Understanding the Barriers to Running Longitudinal Studies on Crowdsourcing Platforms
  5. Fairness without Sensitive Attributes via Knowledge Sharing
  6. How Good are LLMs in Generating Personalized Advertisements?
  7. Who Determines What Is Relevant? Humans or AI? Why Not Both?
  8. Editorial: Special Issue on Human in the Loop Data Curation
  9. On the Impact of Showing Evidence from Peers in Crowdsourced Truthfulness Assessments
  10. Data Bias Management
  11. Perspectives on Large Language Models for Relevance Judgment
  12. On the Impact of Data Quality on Image Classification Fairness
  13. How Many Crowd Workers Do I Need? On Statistical Power When Crowdsourcing Relevance Judgments
  14. Human-in-the-loop Regular Expression Extraction for Single Column Format Inconsistency
  15. The Community Notes Observatory: Can Crowdsourced Fact-Checking be Trusted in Practice?
  16. Report on the 1st Workshop on Human-in-the-Loop Data Curation (HIL-DC 2022) at CIKM 2022
  17. A Data-Driven Analysis of Behaviors in Data Curation Processes
  18. Combining Human and Machine Confidence in Truthfulness Assessment
  19. Using Computers to Fact-Check Text and Justify the Decision
  20. Socio-Economic Diversity in Human Annotations
  21. Preferences on a Budget: Prioritizing Document Pairs when Crowdsourcing Relevance Judgments
  22. Does Evidence from Peers Help Crowd Workers in Assessing Truthfulness?
  23. Effects of Technological Interventions for Self-regulation: A Control Experiment in Learnersourcing
  24. Hierarchical Clustering of Corals using Image Clustering
  25. An Analysis of the Australian Political Discourse in Sponsored Social Media Content
  26. On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices
  27. Charting the Design and Analytics Agenda of Learnersourcing Systems
  28. Report on the First Workshop on Bias in Automatic Knowledge Graph Construction at AKBC 2020
  29. Modelling User Behavior Dynamics with Embeddings
  30. The COVID-19 Infodemic
  31. How to make crowd workers earn an hourly wage
  32. On Understanding Data Worker Interaction Behaviors
  33. Can The Crowd Identify Misinformation Objectively?
  34. Representation learning for entity type ranking
  35. Health Card Retrieval for Consumer Health Search
  36. On Transforming Relevance Scales
  37. Understanding Worker Moods and Reactions to Rejection in Crowdsourcing
  38. Quality Control Attack Schemes in Crowdsourcing
  39. Health Cards for Consumer Health Search
  40. Implicit Bias in Crowdsourced Knowledge Graphs
  41. Scalpel-CD: Leveraging Crowdsourcing and Deep Probabilistic Modeling for Debugging Noisy Training Data
  42. Deadline-Aware Fair Scheduling for Multi-Tenant Crowd-Powered Systems
  43. All Those Wasted Hours
  44. Novel insights into views towards H1N1 during the 2009 Pandemic: a thematic analysis of Twitter data
  45. Non-parametric Class Completeness Estimators for Collaborative Knowledge Graphs—The Case of Wikidata
  46. Semantic Interlinking
  47. The Impact of Task Abandonment in Crowdsourcing
  48. The Evolution of Power and Standard Wikidata Editors: Comparing Editing Behavior over Time to Predict Lifespan and Volume of Edits
  49. Can User Behaviour Sequences Reflect Perceived Novelty?
  50. Moral Panic through the Lens of Twitter
  51. Investigating User Perception of Gender Bias in Image Search
  52. On Fine-Grained Relevance Scales
  53. On the Volatility of Commercial Search Engines and its Impact on Information Retrieval Research
  54. Crowd Anatomy Beyond the Good and Bad: Behavioral Traces for Crowd Worker Modeling and Pre-selection
  55. Measuring the Effect of Public Health Campaigns on Twitter: The Case of World Autism Awareness Day
  56. Augmenting Intelligence with Humans-in-the-Loop (HumL@WWW2018) Chairs' Welcome & Organization
  57. Chapter 4: Using Twitter as a Data Source: An Overview of Ethical, Legal, and Methodological Challenges
  58. Understanding Engagement through Search Behaviour
  59. Considering Assessor Agreement in IR Evaluation
  60. Modus Operandi of Crowd Workers
  61. An Introduction to Hybrid Human-Machine Information Systems
  62. Towards building a standard dataset for Arabic keyphrase extraction evaluation
  63. Scheduling Human Intelligence Tasks in Multi-Tenant Crowd-Powered Systems
  64. Contextualized ranking of entity types based on knowledge graphs
  65. A Tutorial on Leveraging Knowledge Graphs for Web Search
  66. The Relationship Between User Perception and User Behaviour in Interactive Information Retrieval Evaluation
  67. Hybrid human–machine information systems: Challenges and opportunities
  68. Pooling-based continuous evaluation of information retrieval systems
  69. Human Beyond the Machine: Challenges and Opportunities of Microtask Crowdsourcing
  70. The Dynamics of Micro-Task Crowdsourcing
  71. Understanding Malicious Behavior in Crowdsourcing Platforms
  72. Correct Me If I'm Wrong
  73. B-hist: Entity-centric search over personal web browsing history
  74. Hippocampus
  75. Effective named entity recognition for idiosyncratic web collections
  76. Entity disambiguation in tweets leveraging user social profiles
  77. Large-scale linked data integration using probabilistic reasoning and crowdsourcing
  78. NoizCrowd: A Crowd-Based Data Gathering and Management System for Noise Level Data
  79. Ontology-Based Word Sense Disambiguation for Scientific Literature
  80. TRank: Ranking Entity Types Using the Web of Data
  81. The Bowlogna ontology: Fostering open curricula and agile knowledge bases for Europe's higher education landscape
  82. ZenCrowd
  83. BowlognaBench—Benchmarking RDF Analytics
  84. Combining inverted indices and structured search for ad-hoc object retrieval
  85. Predicting the Future Impact of News Events
  86. From people to entities
  87. Visual interfaces for stimulating exploratory search
  88. Report on INEX 2009
  89. Why finding entities in Wikipedia is difficult, sometimes
  90. Leveraging personal metadata for Desktop search: The Beagle++ system
  91. Dear search engine: what's your opinion about...?
  92. Entity summarization of news articles
  93. Exploiting click-through data for entity retrieval
  94. Overview of the INEX 2009 Entity Ranking Track
  95. Ranking Entities Using Web Search Query Logs
  96. TAER
  97. The missing links
  98. An Architecture for Finding Entities on the Web
  99. Report on INEX 2008
  100. A Vector Space Model for Ranking Entities and Its Application to Expert Search
  101. How to Trace and Revise Identities
  102. L3S at INEX 2008: Retrieving Entities Using Structured Information
  103. Overview of the INEX 2008 Entity Ranking Track
  104. A Model for Ranking Entities and Its Application to Wikipedia
  105. Social recommendations of content and metadata
  106. Leveraging semantic technologies for enterprise search
  107. A Classification of IR Effectiveness Metrics
  108. L3S at INEX 2007: Query Expansion for Entity Ranking Using a Highly Accurate Ontology
  109. Ranking Categories for Web Search
  110. Semantically Enhanced Entity Ranking