
Poster in Workshop: I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models

Structure-Aware Path Inference for Neural Finite State Transducers

Weiting Tan · Chu-Cheng Lin · Jason Eisner


Abstract:

Finite-state transducers (FSTs) are a traditional approach to string-to-string mapping. Each FST path specifies a possible alignment of the input and output strings. Compared to an unstructured seq2seq model, the FST includes an explicit latent alignment variable and equips it with domain-specific hard constraints and featurization, which can improve generalization from small training sets. Previous work has shown how to score FST paths with a trainable neural architecture; this improves the model's expressive power by dropping the usual Markov assumption, but makes inference more difficult for the same reason. In this paper, we focus on the resulting challenge of imputing the latent alignment path that explains a given pair of input and output strings (e.g., during training). We train three autoregressive approximate models for amortized inference of the path, which can then be used as proposal distributions for importance sampling. All three models perform lookahead. Our most sophisticated (and novel) model leverages the FST structure to consider the graph of future paths; unfortunately, we find that it loses out to the simpler approaches, except on an artificial task that we concocted to confuse the simpler approaches.
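To make the inference recipe concrete, below is a minimal Python sketch of the self-normalized importance-sampling step, in which a trained proposal distribution over alignment paths is used to estimate the marginal likelihood of an (input, output) string pair. The `model.log_joint` and `proposal` interfaces are hypothetical stand-ins for the paper's neural FST scorer and its autoregressive proposal models, not their actual API.

```python
import math

def estimate_log_likelihood(x, y, model, proposal, num_samples=32):
    """Self-normalized importance-sampling estimate of log p(y | x),
    marginalizing over latent FST alignment paths.

    Assumed (hypothetical) interfaces:
      model.log_joint(path, x, y)    -- log score of a path under the neural FST
      proposal.sample(x, y)          -- draw a candidate alignment path
      proposal.log_prob(path, x, y)  -- log proposal probability of that path
    """
    log_weights = []
    for _ in range(num_samples):
        path = proposal.sample(x, y)  # candidate alignment of x and y
        # Importance weight: model score relative to the proposal's score.
        log_w = model.log_joint(path, x, y) - proposal.log_prob(path, x, y)
        log_weights.append(log_w)
    # Numerically stable log-mean-exp of the importance weights.
    m = max(log_weights)
    return m + math.log(sum(math.exp(w - m) for w in log_weights) / num_samples)
```

A better-trained proposal (e.g., one that performs lookahead over the FST's future paths) concentrates its samples on high-scoring alignments, reducing the variance of this estimator for a fixed sample budget.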
