Few-shot learning on structured data is arguably an essential requirement for deploying AI models in the real world. Classical supervised ML setups assume access to large amounts of labelled samples, an assumption that often fails in practice, for instance in biochemical, health, social or weather contexts. Many of these domains can be naturally represented as graphs, so structure also plays a key role in designing methods that can successfully handle such scenarios. It is therefore important to fully exploit the few labels available, so that our models can learn representations as good as those obtained by data-hungry methods. The talk presents two works that address this issue from different angles: Graph Density-Aware Losses for Novel Compositions in Scene Graph Generation (Knyazev et al., 2020) and Message Passing Neural Processes (Cangea & Day et al., 2020).