LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding
Paper: arXiv:2202.13669
This repository is a port of the original lilt-only-base model weights from the Language-Independent Layout Transformer (LiLT).

The weights here are not useful on their own; they are intended to be combined with a RoBERTa-like model, as outlined HERE.

This repository aims to make it easier for others to combine LiLT with a RoBERTa-like model of their choice. For an example of how to fuse XLM-RoBERTa with LiLT for multi-modal training/fine-tuning, please refer to the script HERE.
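To illustrate the general idea behind such a fusion, the sketch below merges two checkpoint state dicts into one, giving each source its own key prefix. The prefixes `text.` and `lilt.` and the function name are illustrative assumptions, not the naming used by the actual script linked above; see that script for the real procedure.

```python
# Minimal sketch: fuse a text model's weights with the LiLT layout
# weights into a single state dict. Prefixes ("text.", "lilt.") are
# illustrative only; the linked fusion script defines the real scheme.

def fuse_state_dicts(text_sd, lilt_sd):
    """Merge two checkpoint state dicts, prefixing keys by source."""
    fused = {}
    for key, tensor in text_sd.items():
        fused[f"text.{key}"] = tensor   # text-stream weights
    for key, tensor in lilt_sd.items():
        fused[f"lilt.{key}"] = tensor   # layout-stream weights
    return fused

# Toy example with placeholder values standing in for real tensors
text_sd = {"embeddings.weight": 1, "encoder.layer.0.weight": 2}
lilt_sd = {"layout_embeddings.weight": 3}
fused = fuse_state_dicts(text_sd, lilt_sd)
```

In practice both state dicts would come from `torch.load` (or `AutoModel.from_pretrained(...).state_dict()`), and the fused dict would be loaded into a combined model before fine-tuning.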