
VK-LSVD: Large Short-Video Dataset

VK-LSVD is the largest open industrial short-video recommendation dataset with real-world interactions:

  • 40B unique user–item interactions with rich feedback (timespent, like, dislike, share, bookmark, click_on_author, open_comments) and context (place, platform, agent);
  • 10M users (with age, gender, geo);
  • 20M short videos (with duration, author_id, content embedding);
  • Global Temporal Ordering across six consecutive months of user interactions.

Why short video? Users often watch dozens of clips per session, producing dense, time-ordered signals well suited for modeling. Unlike music, podcasts, or long-form video, which are often consumed in the background, short videos are foreground by design. They also do not exhibit repeat exposure. Even without explicit feedback, signals such as skips, completions, and replays yield strong implicit labels. Single-item feeds also simplify attribution and reduce confounding compared with multi-item layouts.


Note: The test set will be released after the upcoming challenge.




Basic Statistics

  • Users: 10,000,000
  • Items: 19,627,601
  • Unique interactions: 40,774,024,903
  • Interaction density: 0.0208%
  • Total watch time: 858,160,100,084 s
  • Likes: 1,171,423,458
  • Dislikes: 11,860,138
  • Shares: 262,734,328
  • Bookmarks: 40,124,463
  • Clicks on author: 84,632,666
  • Comment opens: 481,251,593
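
The density figure is simply the number of unique interactions divided by the size of the full user–item grid:

users, items, interactions = 10_000_000, 19_627_601, 40_774_024_903
print(f'{interactions / (users * items):.4%}')  # 0.0208%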

Data Description

Privacy-preserving taxonomy — all categorical metadata (user_id, geo, item_id, author_id, place, platform, agent) is anonymized into stable integer IDs (consistent across splits; no reverse mapping provided).

Interactions

interactions/
Each row is one observation (a short video shown to a user) with its feedback and context. There are no repeated exposures of the same user–item pair.
Global Temporal Split (GTS): the train / validation / test splits preserve time order, so models train on the past and are validated and tested on the future.
Chronology: files are organized by week (e.g., week_XX.parquet); rows within each file are sorted by increasing timestamp.

| Field           | Type    | Description                          |
|-----------------|---------|--------------------------------------|
| user_id         | uint32  | User identifier                      |
| item_id         | uint32  | Video identifier                     |
| place           | uint8   | Place: feed/search/group/… (24 ids)  |
| platform        | uint8   | Platform: Android/Web/TV/… (11 ids)  |
| agent           | uint8   | Agent/client: browser/app (29 ids)   |
| timespent       | uint8   | Watch time (0–255 seconds)           |
| like            | boolean | User liked the video                 |
| dislike         | boolean | User disliked the video              |
| share           | boolean | User shared the video                |
| bookmark        | boolean | User bookmarked the video            |
| click_on_author | boolean | User opened the author page          |
| open_comments   | boolean | User opened the comments section     |
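
A quick way to check the schema against this table (the train path here is assumed to mirror the subsample layout shown in Quick Start):

import polars as pl

# Expected: user_id/item_id as UInt32, place/platform/agent/timespent as UInt8,
# and the feedback flags as Boolean
schema = pl.read_parquet_schema('VK-LSVD/interactions/train/week_00.parquet')
print(schema)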

Users metadata

users_metadata.parquet

| Field                   | Type   | Description                                              |
|-------------------------|--------|----------------------------------------------------------|
| user_id                 | uint32 | User identifier                                          |
| age                     | uint8  | Age (18–70 years)                                        |
| gender                  | uint8  | Gender                                                   |
| geo                     | uint8  | Most frequent user location (80 ids)                     |
| train_interactions_rank | uint32 | Popularity rank for sampling (lower = more interactions) |

Items metadata

items_metadata.parquet

| Field                   | Type   | Description                                              |
|-------------------------|--------|----------------------------------------------------------|
| item_id                 | uint32 | Video identifier                                         |
| author_id               | uint32 | Author identifier                                        |
| duration                | uint8  | Video duration (seconds)                                 |
| train_interactions_rank | uint32 | Popularity rank for sampling (lower = more interactions) |

Embeddings: variable width

Embeddings are trained strictly on content (video/description/audio, etc.) — no collaborative signal mixed in.
Components are ordered: the dot product of the first n components approximates the cosine similarity of the original production embeddings.
This lets researchers pick any dimensionality (1…64) to trade quality for speed and memory.

item_embeddings.npz

| Field     | Type        | Description                                    |
|-----------|-------------|------------------------------------------------|
| item_id   | uint32      | Video identifier                               |
| embedding | float16[64] | Item content embedding with ordered components |
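
A minimal sketch of the ordered-components property (paths follow the Quick Start below; n = 16 is an arbitrary choice): truncate the stored vectors to their first n components, and dot products approximate cosine similarity in the original embedding space.

import numpy as np

npz = np.load('VK-LSVD/metadata/item_embeddings.npz')
item_ids = npz['item_id']
embeddings = npz['embedding'].astype(np.float32)  # (num_items, 64), stored as float16

n = 16                       # any width in 1..64: smaller is faster and lighter
emb_n = embeddings[:, :n]

# Dot products over the first n components approximate the cosine
# similarity of the original production embeddings
scores = emb_n @ emb_n[0]                    # similarity of every item to item 0
top10 = item_ids[np.argsort(-scores)[1:11]]  # 10 nearest neighbours (skip itself)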

Quick Start

Load a small subsample

from huggingface_hub import hf_hub_download
import polars as pl
import numpy as np

subsample_name = 'up0.001_ip0.001'  # 0.1% most popular users × 0.1% most popular items
content_embedding_size = 32         # keep the first 32 of 64 ordered components

# Weeks 00–24 form the train split; week 25 is the validation split
train_interactions_files = [f'subsamples/{subsample_name}/train/week_{i:02}.parquet'
                            for i in range(25)]
val_interactions_file = [f'subsamples/{subsample_name}/validation/week_25.parquet']

metadata_files = ['metadata/users_metadata.parquet',
                  'metadata/items_metadata.parquet',
                  'metadata/item_embeddings.npz']

# Download every file into a local VK-LSVD/ directory
for file in (train_interactions_files +
             val_interactions_file +
             metadata_files):
    hf_hub_download(
        repo_id='deepvk/VK-LSVD', repo_type='dataset',
        filename=file, local_dir='VK-LSVD'
    )

# Lazily scan the weekly files, then materialize with the streaming engine
train_interactions = pl.concat([pl.scan_parquet(f'VK-LSVD/{file}')
                                for file in train_interactions_files])
train_interactions = train_interactions.collect(engine='streaming')

val_interactions = pl.read_parquet(f'VK-LSVD/{val_interactions_file[0]}')

train_users = train_interactions.select('user_id').unique()
train_items = train_interactions.select('item_id').unique()

# Load the embeddings archive once; 'item_id' and 'embedding' are aligned arrays
embeddings_npz = np.load('VK-LSVD/metadata/item_embeddings.npz')
item_ids = embeddings_npz['item_id']
item_embeddings = embeddings_npz['embedding']

# Keep only embeddings of items seen in train, truncated to the chosen width
mask = np.isin(item_ids, train_items.to_numpy())
item_ids = item_ids[mask]
item_embeddings = item_embeddings[mask]
item_embeddings = item_embeddings[:, :content_embedding_size]

users_metadata = pl.read_parquet('VK-LSVD/metadata/users_metadata.parquet')
items_metadata = pl.read_parquet('VK-LSVD/metadata/items_metadata.parquet')

# Restrict metadata to train entities and attach the embeddings to items
users_metadata = users_metadata.join(train_users, on='user_id')
items_metadata = items_metadata.join(train_items, on='item_id')
items_metadata = items_metadata.join(pl.DataFrame({'item_id': item_ids,
                                                   'embedding': item_embeddings}),
                                     on='item_id')
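
The feedback and metadata fields combine naturally into implicit labels. A minimal sketch, assuming an illustrative completion threshold of 0.9 (neither the threshold nor the 'positive' definition is part of the dataset):

# Derive a completion ratio and a simple binary label from it
labeled = (
    train_interactions
    .join(items_metadata.select(['item_id', 'duration']), on='item_id')
    .with_columns((pl.col('timespent') /
                   pl.col('duration').clip(lower_bound=1))  # guard against zero durations
                  .alias('completion_ratio'))
    .with_columns(((pl.col('completion_ratio') >= 0.9) |
                   pl.col('like')).alias('positive'))
)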

Configurable Subsets

We provide several ready-made slices and simple utilities to compose your own subset that matches your task, data budget, and hardware. You can control density via popularity quantiles (train_interactions_rank), draw random users, or pick specific time windows — while preserving the Global Temporal Split.

Representative subsamples are provided for quick experiments:

| Subset          | Users      | Items      | Interactions   | Density  |
|-----------------|------------|------------|----------------|----------|
| whole           | 10,000,000 | 19,627,601 | 40,774,024,903 | 0.0208%  |
| ur0.1           | 1,000,000  | 18,701,510 | 4,066,457,259  | 0.0217%  |
| ur0.01          | 100,000    | 12,467,302 | 407,854,360    | 0.0327%  |
| ur0.01_ir0.01   | 90,178     | 125,018    | 4,044,900      | 0.0359%  |
| up0.01_ir0.01   | 100,000    | 171,106    | 38,404,921     | 0.2245%  |
| ur0.01_ip0.01   | 99,893     | 196,277    | 191,625,941    | 0.9774%  |
| up0.01_ip0.01   | 100,000    | 196,277    | 1,417,906,344  | 7.2240%  |
| up0.001_ip0.001 | 10,000     | 19,628     | 47,976,280     | 24.4428% |
| up-0.9_ip-0.9   | 8,939,432  | 17,654,817 | 2,861,937,212  | 0.0018%  |
  • urX — X fraction of random users (e.g., ur0.01 = 1% of users); irX — the same for items.
  • upX — X fraction of the most popular users (by train_interactions_rank); ipX — the same for items.
  • A negative X denotes the least-popular fraction (e.g., −0.9 → bottom 90% by popularity).

For example, to build ur0.01_ip0.01 (1% of random users, 1% of the most popular items), use the snippet below.

import polars as pl

def get_sample(entries: pl.LazyFrame, split_column: str, fraction: float) -> pl.LazyFrame:
    # A positive fraction keeps rows at or below that quantile of `split_column`;
    # a negative fraction keeps rows at or above the (1 + fraction) quantile.
    if fraction >= 0:
        entries = entries.filter(pl.col(split_column) <=
                                 pl.col(split_column).quantile(fraction,
                                                               interpolation='midpoint'))
    else:
        entries = entries.filter(pl.col(split_column) >=
                                 pl.col(split_column).quantile(1 + fraction,
                                                               interpolation='midpoint'))
    return entries

users = pl.scan_parquet('VK-LSVD/metadata/users_metadata.parquet')
users_sample = get_sample(users, 'user_id', 0.01).select(['user_id'])  # ur0.01

items = pl.scan_parquet('VK-LSVD/metadata/items_metadata.parquet')
items_sample = get_sample(items, 'train_interactions_rank', 0.01).select(['item_id'])  # ip0.01

# Inner joins restrict interactions to the sampled users and items;
# maintain_order='left' preserves the chronological row order
interactions = pl.scan_parquet('VK-LSVD/interactions/validation/week_25.parquet')
interactions = interactions.join(users_sample, on='user_id', maintain_order='left')
interactions = interactions.join(items_sample, on='item_id', maintain_order='left')

interactions_sample = interactions.collect(engine='streaming')

To get up-0.9_ip-0.9 (the 90% least popular users and the 90% least popular items), replace the user and item sampling lines with:

users_sample = get_sample(users, 'train_interactions_rank', -0.9).select(['user_id'])
items_sample = get_sample(items, 'train_interactions_rank', -0.9).select(['item_id'])
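
If you plan to reuse a composed subset across experiments, it can be written back to parquet once (the output path below is just an example, not part of the dataset layout):

from pathlib import Path

out_dir = Path('VK-LSVD/subsamples/custom/validation')  # example location
out_dir.mkdir(parents=True, exist_ok=True)
interactions_sample.write_parquet(out_dir / 'week_25.parquet')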