Columbia Team Wins Third Place in the FG 2021 Families In the Wild Kinship Verification Challenge

Computer science students participated in the IEEE International Conference on Automatic Face and Gesture Recognition 2021 (FG 2021) Kinship Verification Challenge as part of their Deep Learning (DL) course, taught by Adjunct Associate Professor Iddo Drori, and presented their findings at the conference.

The goal of the kinship verification competition is to determine whether a parent-child, sibling, or grandparent-grandchild relationship exists between two people. Kinship verification is important in social media applications, forensic investigations, finding missing children, and reuniting families. The team demonstrated high-quality kinship verification by participating in the FG 2021 Recognizing Families in the Wild Challenge, which provides the largest publicly available dataset in the field. Their third-place approach was the first to ensemble models written by humans with models written automatically by OpenAI Codex.

As part of the unique Deep Learning course curriculum, students get to compete in common task framework competitions, which enable them to test the waters in the real world while advancing science. This semester, Drori and teaching assistants Newman Cheng and Vaibhav Bagri performed feasibility tests on the Kinship Verification Challenge and found it in line with the course goals.

Students used the kinship verification dataset to design, develop, and train deep learning models. Over the course of three weeks, teams worked on and improved their submissions, competing with groups from across the world. Drori then worked with the leading teams, using OpenAI Codex to improve the verification models even further. The effort paid off: the students finished in the top three of the competition leaderboard and quickly wrote up their findings.

The winning team is composed of graduate students Junyi Huang (Mathematics), Maxwell Strome (Computer Science), Ian Jenkins (Applied Physics and Math), Parker Williams (Computer Science), Bo Feng (Electrical Engineering), Yaning Wang (Electrical Engineering), and Roman Wang (Computer Science).

“Winning third place in this international challenge is an excellent achievement. The teams used both humans and machines to automatically write the code, which is a first and commendable feat!” said Drori. 

Remembering Davide Giri

The computer science community lost Davide Giri on December 2nd. For the past seven years, he worked in the System-Level Design Group under the guidance of Professor Luca Carloni. Friends and colleagues share their thoughts and memories of Davide.

The System-Level Design Group (2019). Front row (left to right): Luca Piccolboni, Kuan-Lin Chiu, Davide Giri, Jihye Kwon, Maico Cassel. Back row (left to right): Paolo Mantovani, Guy Eichler, Luca Carloni, Joseph Zuckerman, Giuseppe Di Guglielmo


CS Researchers Present at NeurIPS 2021

Researchers from the department presented machine learning and artificial intelligence research at the thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021).

Posters

The Causal-Neural Connection: Expressiveness, Learnability, and Inference
Kevin M. Xia Columbia University, Kai-Zhan Lee Columbia University, Yoshua Bengio Université de Montréal, Elias Bareinboim Columbia University

One of the central elements of any causal inference is an object called a structural causal model (SCM), which represents a collection of mechanisms and exogenous sources of random variation of the system under investigation (Pearl, 2000). An important property of many kinds of neural networks is universal approximability: the ability to approximate any function to arbitrary precision. Given this property, one may be tempted to surmise that a collection of neural nets is capable of learning any SCM by training on data generated by that SCM. In this paper, we show this is not the case by disentangling the notions of expressivity and learnability. Specifically, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020), which describes the limits of what can be learned from data, still holds for neural models. For instance, an arbitrarily complex and expressive neural net is unable to predict the effects of interventions given observational data alone. Given this result, we introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences. Building on this new class of models, we focus on solving two canonical tasks found in the literature known as causal identification and estimation. Leveraging the neural toolbox, we develop an algorithm that is both sufficient and necessary to determine whether a causal effect can be learned from data (i.e., causal identifiability); it then estimates the effect whenever identifiability holds (causal estimation). Simulations corroborate the proposed approach.
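
To make the expressivity-versus-learnability gap concrete, here is a minimal sketch (our illustration, not the paper's NCM construction) of two structural causal models that generate exactly the same observational distribution yet disagree on the effect of an intervention, so no model fit to observational data alone can distinguish them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.integers(0, 2, size=n)        # exogenous noise U ~ Bernoulli(0.5)

# SCM A: X := U, Y := U   (Y ignores X; both are copies of U)
# SCM B: X := U, Y := X   (Y is caused by X)
x_a, y_a = u, u
x_b, y_b = u, u.copy()                # observationally identical: X = Y = U in both

# Layer 1 (seeing): both SCMs induce the same joint P(X, Y).
print((x_a == y_a).mean(), (x_b == y_b).mean())   # 1.0 and 1.0

# Layer 2 (doing): intervene do(X := 1) and regenerate Y by each mechanism.
y_a_do = u                             # SCM A: Y := U, unaffected by X
y_b_do = np.ones(n, dtype=int)         # SCM B: Y := X = 1
print(y_a_do.mean(), y_b_do.mean())    # ~0.5 vs. 1.0: the interventions disagree
```

However expressive a network trained on the observational draws may be, it has no basis for choosing between the two interventional answers, which is the causal hierarchy theorem at work.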

Double Machine Learning Density Estimation for Local Treatment Effects with Instruments
Yonghan Jung Purdue University, Jin Tian Iowa State University, Elias Bareinboim Columbia University

It is common to quantify causal effects with mean values, which, however, may fail to capture significant distribution differences of the outcome under different treatments. We study the problem of estimating the density of the causal effect of a binary treatment on a continuous outcome given a binary instrumental variable in the presence of covariates. Specifically, we consider the local treatment effect, which measures the effect of treatment among those who comply with the assignment under the assumption of monotonicity (only the ones who were offered the treatment take it). We develop two families of methods for this task, kernel-smoothing and model-based approximations — the former smooths the density by convolving with a smooth kernel function; the latter projects the density onto a finite-dimensional density class. For both approaches, we derive double/debiased machine learning (DML) based estimators. We study the asymptotic convergence rates of the estimators and show that they are robust to the biases in nuisance function estimation. We illustrate the proposed methods on synthetic data and a real dataset called 401(k).
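
For flavor, here is the generic kernel-smoothing ingredient on its own: a plain Gaussian kernel density estimate over stand-in outcome draws (a minimal sketch assuming NumPy, not the paper's DML estimator or its nuisance corrections):

```python
import numpy as np

def kde(samples, grid, h):
    # Smooth the empirical distribution by convolving with a Gaussian kernel:
    # f_hat(y) = (1/n) * sum_i K((y - Y_i) / h) / h
    z = (grid[:, None] - samples[None, :]) / h
    return (np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)).mean(axis=1) / h

rng = np.random.default_rng(0)
y = rng.normal(1.0, 0.5, size=500)             # stand-in outcome draws
grid = np.linspace(-1.0, 3.0, 400)
f_hat = kde(y, grid, h=0.2)
print((f_hat * (grid[1] - grid[0])).sum())      # ~1.0: the estimate integrates to one
```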

Sequential Causal Imitation Learning with Unobserved Confounders
Daniel Kumor Purdue University, Junzhe Zhang Columbia University, Elias Bareinboim Columbia University

“Monkey see monkey do” is an age-old adage, referring to naive imitation without a deep understanding of a system’s underlying mechanics. Indeed, if a demonstrator has access to information unavailable to the imitator (monkey), such as a different set of sensors, then no matter how perfectly the imitator models its perceived environment (See), attempting to directly reproduce the demonstrator’s behavior (Do) can lead to poor outcomes. Imitation learning in the presence of a mismatch between demonstrator and imitator has been studied in the literature under the rubric of causal imitation learning (Zhang et al., 2020), but existing solutions are limited to single-stage decision-making. This paper investigates the problem of causal imitation learning in sequential settings, where the imitator must make multiple decisions per episode. We develop a graphical criterion that is both necessary and sufficient for determining the feasibility of causal imitation, providing conditions when an imitator can match a demonstrator’s performance despite differing capabilities. Finally, we provide an efficient algorithm for determining imitability and corroborate our theory with simulations.

Nested Counterfactual Identification from Arbitrary Surrogate Experiments
Juan Correa Columbia University, Sanghack Lee Columbia University, Elias Bareinboim Columbia University

The Ladder of Causation describes three qualitatively different types of activities an agent may be interested in engaging in, namely, seeing (observational), doing (interventional), and imagining (counterfactual) (Pearl and Mackenzie, 2018). The inferential challenge imposed by the causal hierarchy is that data is collected by an agent observing or intervening in a system (layers 1 and 2), while its goal may be to understand what would have happened had it taken a different course of action, contrary to what factually ended up happening (layer 3). While there exists a solid understanding of the conditions under which cross-layer inferences are allowed from observations to interventions, the results are somewhat scarcer when targeting counterfactual quantities. In this paper, we study the identification of nested counterfactuals from an arbitrary combination of observations and experiments. Specifically, building on a more explicit definition of nested counterfactuals, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrarily nested counterfactuals to unnested ones. For instance, applications in mediation and fairness analysis usually evoke notions of direct, indirect, and spurious effects, which naturally require nesting. Second, we introduce a sufficient and necessary graphical condition for counterfactual identification from an arbitrary combination of observational and experimental distributions. Lastly, we develop an efficient and complete algorithm for identifying nested counterfactuals; if the algorithm fails to return an expression for a query, the query is not identifiable.

Causal Identification with Matrix Equations
Sanghack Lee Columbia University, Elias Bareinboim Columbia University

Causal effect identification is concerned with determining whether a causal effect is computable from a combination of qualitative assumptions about the underlying system (e.g., a causal graph) and distributions collected from this system. Many identification algorithms exclusively rely on graphical criteria made of a non-trivial combination of probability axioms, do-calculus, and refined c-factorization (e.g., Lee & Bareinboim, 2020). In a sequence of increasingly sophisticated results, it has been shown how proxy variables can be used to identify certain effects that would not be otherwise recoverable in challenging scenarios through solving matrix equations (e.g., Kuroki & Pearl, 2014; Miao et al., 2018). In this paper, we develop a new causal identification algorithm that utilizes both graphical criteria and matrix equations. Specifically, we first characterize the relationships between certain graphically-driven formulae and matrix multiplications. With such characterizations, we broaden the spectrum of proxy variable-based identification conditions and further propose novel intermediary criteria based on the pseudoinverse of a matrix. Finally, we devise a causal effect identification algorithm, which accepts as input a collection of marginal, conditional, and interventional distributions, integrating enriched matrix-based criteria into a graphical identification approach.

Bayesian decision-making under misspecified priors with applications to meta-learning
Max Simchowitz Massachusetts Institute of Technology, Christopher Tosh Columbia University, Akshay Krishnamurthy Microsoft Research NYC, Daniel Hsu Columbia University, Thodoris Lykouris Massachusetts Institute of Technology, Miro Dudik Microsoft Research NYC, Robert E Schapire Microsoft Research NYC

Thompson sampling and other Bayesian sequential decision-making algorithms are among the most popular approaches to tackle explore/exploit trade-offs in (contextual) bandits. The choice of prior in these algorithms offers flexibility to encode domain knowledge but can also lead to poor performance when misspecified. In this paper, we demonstrate that performance degrades gracefully with misspecification. We prove that the expected reward accrued by Thompson sampling (TS) with a misspecified prior differs by at most Õ(H²ε) from TS with a well-specified prior, where ε is the total-variation distance between priors and H is the learning horizon. Our bound does not require the prior to have any parametric form. For priors with bounded support, our bound is independent of the cardinality or structure of the action space, and we show that it is tight up to universal constants in the worst case. Building on our sensitivity analysis, we establish generic PAC guarantees for algorithms in the recently studied Bayesian meta-learning setting and derive corollaries for various families of priors. Our results generalize along two axes: (1) they apply to a broader family of Bayesian decision-making algorithms, including a Monte-Carlo implementation of the knowledge gradient algorithm (KG), and (2) they apply to Bayesian POMDPs, the most general Bayesian decision-making setting, encompassing contextual bandits as a special case. Through numerical simulations, we illustrate how prior misspecification and the deployment of one-step look-ahead (KG) can impact the convergence of meta-learning in multi-armed and contextual bandits with structured and correlated priors.
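
For readers unfamiliar with the mechanics, the sketch below runs Beta-Bernoulli Thompson sampling from both a flat prior and a deliberately misspecified one (a toy two-armed bandit of our own devising; the paper's analysis concerns priors over problem instances and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.6])   # two-armed Bernoulli bandit
H = 5000                            # learning horizon

def thompson(alpha, beta):
    # Beta-Bernoulli Thompson sampling starting from a (possibly wrong) prior.
    alpha, beta = alpha.astype(float).copy(), beta.astype(float).copy()
    total = 0.0
    for _ in range(H):
        arm = int(np.argmax(rng.beta(alpha, beta)))  # sample one mean per arm, play the best
        r = float(rng.random() < true_means[arm])
        alpha[arm] += r                              # posterior update
        beta[arm] += 1.0 - r
        total += r
    return total

well = thompson(np.ones(2), np.ones(2))                     # flat prior
mis = thompson(np.array([5.0, 1.0]), np.array([1.0, 5.0]))  # prior favoring the worse arm
print(well, mis)   # the reward gap stays modest: the prior's early bias washes out with data
```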

Support vector machines and linear regression coincide with very high-dimensional features
Navid Ardeshir Columbia University, Clayton Sanford Columbia University, Daniel Hsu Columbia University

The support vector machine (SVM) and minimum Euclidean norm least squares regression are two fundamentally different approaches to fitting linear models, but they have recently been connected in models for very high-dimensional data through a phenomenon of support vector proliferation, where every training example used to fit an SVM becomes a support vector. In this paper, we explore the generality of this phenomenon and make the following contributions. First, we prove a super-linear lower bound on the dimension (in terms of sample size) required for support vector proliferation in independent feature models, matching the upper bounds from previous works. We further identify a sharp phase transition in Gaussian feature models, bound the width of this transition, and give experimental support for its universality. Finally, we hypothesize that this phase transition occurs only in much higher-dimensional settings in the ℓ1 variant of the SVM, and we present a new geometric characterization of the problem that may elucidate this phenomenon for the general ℓp case.
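
Support vector proliferation is easy to observe numerically. A quick check (a sketch assuming NumPy and scikit-learn, with the hard-margin SVM approximated by a very large C):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 20, 5000                          # far more features than samples
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

svm = SVC(kernel="linear", C=1e10).fit(X, y)   # near-hard-margin linear SVM
w_svm = svm.coef_.ravel()
w_ls = np.linalg.pinv(X) @ y                   # minimum-norm least squares solution

print(len(svm.support_), "of", n, "training points are support vectors")
cos = w_svm @ w_ls / (np.linalg.norm(w_svm) * np.linalg.norm(w_ls))
print("cosine similarity between the two solutions:", cos)
```

When every training point is a support vector sitting exactly on the margin, both problems reduce to finding the minimum-norm interpolator of the labels, so the cosine similarity approaches 1.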

Leveraging SE(3) Equivariance for Self-supervised Category-Level Object Pose Estimation from Point Clouds
Xiaolong Li Virginia Tech, Yijia Weng Peking University, Li Yi Tsinghua University, Leonidas Guibas Stanford University, A. Abbott Virginia Tech, Shuran Song Columbia University, He Wang Peking University

Category-level object pose estimation aims to find 6D object poses of previously unseen object instances from known categories without access to object CAD models. To reduce the huge amount of pose annotations needed for category-level learning, we propose for the first time a self-supervised learning framework to estimate category-level 6D object pose from single 3D point clouds. During training, our method assumes no ground-truth pose annotations, no CAD models, and no multi-view supervision. The key to our method is to disentangle shape and pose through an invariant shape reconstruction module and an equivariant pose estimation module, empowered by SE(3) equivariant point cloud networks. The invariant shape reconstruction module learns to perform aligned reconstructions, yielding a category-level reference frame without using any annotations. In addition, the equivariant pose estimation module achieves category-level pose estimation accuracy that is comparable to some fully supervised methods. Extensive experiments demonstrate the effectiveness of our approach on both complete and partial depth point clouds from the ModelNet40 benchmark, and on real depth point clouds from the NOCS-REAL 275 dataset. The project page with code and visualizations can be found at: dragonlong.github.io/equi-pose.

Posterior Collapse and Latent Variable Non-identifiability
Yixin Wang University of Michigan, David Blei Columbia University, John Cunningham Columbia University

Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
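
A standard way to see collapse in practice is to inspect the per-dimension KL divergence between the approximate posterior and the prior; dimensions where it sits at zero carry no information about the input. A small diagnostic sketch (a common heuristic assuming a diagonal-Gaussian encoder and NumPy, not the paper's identifiability analysis):

```python
import numpy as np

def kl_per_dim(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, 1) ) per latent dimension, averaged over a batch.
    return 0.5 * (mu**2 + np.exp(logvar) - logvar - 1).mean(axis=0)

# Stand-in encoder outputs for a batch: dims 0-1 carry signal, dims 2-4 are collapsed
# (their posterior equals the standard-normal prior regardless of the input).
rng = np.random.default_rng(0)
mu = np.concatenate([rng.normal(size=(128, 2)), np.zeros((128, 3))], axis=1)
logvar = np.concatenate([np.full((128, 2), -1.0), np.zeros((128, 3))], axis=1)

print(np.round(kl_per_dim(mu, logvar), 3))   # near-zero entries flag collapsed dimensions
```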

VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio, and Text
Hassan Akbari Columbia University, Liangzhe Yuan Google, Rui Qian Cornell University, Wei-Hong Chuang Google, Shih-Fu Chang Columbia University, Yin Cui Google, Boqing Gong Google

We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance by the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures in the downstream tasks. In particular, VATT’s vision Transformer achieves top-1 accuracy of 82.1% on Kinetics-400, 83.6% on Kinetics-600, 72.7% on Kinetics-700, and 41.1% on Moments in Time, setting new records while avoiding supervised pre-training. Transferring to image classification leads to 78.7% top-1 accuracy on ImageNet compared to 64.7% by training the same Transformer from scratch, showing the generalizability of our model despite the domain gap between videos and images. VATT’s audio Transformer also sets a new record on waveform-based audio event recognition by achieving the mAP of 39.4% on AudioSet without any supervised pre-training.
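
The training signal here is a multimodal contrastive loss. A generic symmetric InfoNCE between two modalities looks roughly like the sketch below (assuming PyTorch; this is an illustration, not VATT's exact objective, which pairs video-audio and video-text with different mechanisms):

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    # Symmetric contrastive loss between paired embeddings of two modalities:
    # clips from the same video are positives (the diagonal), all others negatives.
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / temperature          # batch x batch similarity matrix
    labels = torch.arange(a.size(0))          # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

video = torch.randn(32, 128)   # stand-in clip embeddings from a video backbone
audio = torch.randn(32, 128)   # stand-in embeddings from an audio backbone
print(info_nce(video, audio).item())
```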

Identifying and Benchmarking Natural Out-of-Context Prediction Problems
David Madras University of Toronto, Richard Zemel Columbia University

Deep learning systems frequently fail at out-of-context (OOC) prediction, the problem of making reliable predictions on uncommon or unusual inputs or subgroups of the training distribution. To this end, a number of benchmarks for measuring OOC performance have been recently introduced. In this work, we introduce a framework unifying the literature on OOC performance measurement, and demonstrate how rich auxiliary information can be leveraged to identify candidate sets of OOC examples in existing datasets. We present NOOCh: a suite of naturally-occurring “challenge sets”, and show how varying notions of context can be used to probe specific OOC failure modes. Experimentally, we explore the tradeoffs between various learning approaches on these challenge sets and demonstrate how the choices made in designing OOC benchmarks can yield varying conclusions.

Variational Model Inversion Attacks
Kuan-Chieh Wang University of Toronto, Yan Fu University of Toronto, Ke Li Simon Fraser University, Ashish Khisti University of Toronto, Richard Zemel Columbia University, Alireza Makhzani University of Toronto

Given the ubiquity of deep neural networks, it is important that these models do not reveal information about sensitive data that they have been trained on. In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy. In order to optimize this variational objective, we choose a variational family defined in the code space of a deep generative model, trained on a public auxiliary dataset that shares some structural similarity with the target dataset. Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images.
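
As a schematic of the attack setup (stand-in networks and a simplified objective assuming PyTorch; not the paper's variational family or training procedure), one optimizes codes of a generator trained on public data so that the target classifier becomes confident in a chosen class, while keeping the codes plausible under the generator's prior:

```python
import torch

torch.manual_seed(0)
# Stand-ins for illustration only: a generator trained on public data (G)
# and the private-data classifier under attack (clf).
G = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
clf = torch.nn.Linear(32, 10)
target_class = 3

z = torch.randn(8, 16, requires_grad=True)      # candidate codes in the generator's latent space
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    x = G(z)                                    # synthesize candidate samples
    logp = torch.log_softmax(clf(x), dim=1)[:, target_class]
    prior = -0.5 * (z ** 2).sum(dim=1)          # keep codes plausible under N(0, I)
    loss = -(logp + 0.01 * prior).mean()        # trade off attack accuracy and realism
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(clf(G(z)), dim=1)[:, target_class].mean().item())
```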

Workshop

Causal Inference & Machine Learning: Why now?
Elias Bareinboim Columbia University, Bernhard Schölkopf Max Planck Institute for Intelligent Systems, Terrence Sejnowski Salk Institute, Yoshua Bengio University of Montreal, Judea Pearl University of California Los Angeles

Machine Learning has been extremely successful throughout many critical areas, including computer vision, natural language processing, and game-playing. Still, a growing segment of the machine learning community recognizes that there are still fundamental pieces missing from the AI puzzle, among them causal inference.

This recognition comes from the observation that even though causality is a central component found throughout the sciences, engineering, and many other aspects of human cognition, explicit reference to causal relationships is largely missing in current learning systems. This entails a new goal of integrating causal inference and machine learning capabilities into the next generation of intelligent systems, thus paving the way towards higher levels of intelligence and human-centric AI. The synergy goes in both directions: causal inference benefits from machine learning and vice versa. Current machine learning systems lack the ability to leverage the invariances imprinted by the underlying causal mechanisms towards reasoning about generalizability, explainability, interpretability, and robustness. Current causal inference methods, on the other hand, lack the ability to scale up to high-dimensional settings, where current machine learning systems excel.

The goal of this workshop is to bring together researchers from both camps to initiate principled discussions about the integration of causal reasoning and machine learning perspectives to help tackle the challenging AI tasks of the coming decades. We welcome researchers from all relevant disciplines, including but not limited to computer science, cognitive science, robotics, mathematics, statistics, physics, and philosophy.

The Distinguished Lecture Series Explores the Different Ways Machine Learning is Used in Research

The Distinguished Lecture series brings computer scientists to Columbia to discuss current issues and research that are affecting their particular fields.

This year, four experts covered topics on how machine learning is used in drug discovery, software testing, RNA splicing, and surrogate loss functions:

  • Regina Barzilay, MIT
    Modeling Chemistry for Drug Discovery: Current State and Unsolved Challenges
  • Koushik Sen, UC Berkeley
    Automated Test Generation: A Journey from Symbolic Execution to Smart Fuzzing and Beyond
  • Oded Regev, Courant Institute, New York University
    Using Machine Learning for Scientific Discovery in Biology
  • Shivani Agarwal, University of Pennsylvania
    Surrogate Loss Functions in Machine Learning: What are the Fundamental Design Principles?


Below are two of the lectures, from prominent faculty at universities across the country.

Automated Test Generation: A Journey from Symbolic Execution to Smart Fuzzing and Beyond
Koushik Sen, UC Berkeley

Surrogate Loss Functions in Machine Learning: What are the Fundamental Design Principles?
Shivani Agarwal, University of Pennsylvania

Bringing High-speed Internet to More Americans

Columbia Engineering professor Henning Schulzrinne unpacks President Biden’s $1 trillion infrastructure bill and its promise to expand broadband access for people in rural and low-income areas.

Mechanism Design for Social Good Started as a Reading Group, Now It Has Launched an ACM Conference

Computer science research is about solving problems with computational tools — it could be how to predict where the next flu outbreak will occur, how robots can make life easier for senior citizens, or how to fight misinformation on social media. But while computer science (CS) researchers have all the technical know-how, they still need to collaborate with people who are on the ground and know about the particular problem or situation.

A group of graduate students from various institutions and disciplines (CS, Economics, and Operations Research, to name a few) recognized the gap and the need for connections and collaboration between the different groups. And so, Mechanism Design for Social Good (MD4SG) was born in 2016, co-founded by Rediet Abebe and Kira Goldner. From a 12-member reading group, the multi-institutional initiative expanded to 2,000 participants involved in working groups, colloquium series, tutorials, and workshops at the ACM Conference on Economics and Computation (EC '17 and EC '18).

Riding the wave of support from researchers, practitioners, governments, and non-profit organizations, the inaugural ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), co-chaired by Ana-Andreea Stoica, Rediet Abebe, and Irene Lo, was organized this October.

The conference highlighted research where CS, economics, operations research, and social and humanistic sciences intersect and help improve equity and access for historically disadvantaged and underserved communities. A number of Best Paper and Poster Awards were presented at the digital conference.

Ana-Andreea Stoica

We caught up with Ana-Andreea Stoica to find out more about the conference and why it is important to develop multi-disciplinary research opportunities.

What happened to make you realize that the MD4SG workshops could be expanded into a conference? How did the EAAMO conference come about?

Our technical workshop series has been growing steadily since its first iteration in 2017. In 2020, we had the first standalone workshop, which drew over 130 submissions. Given the rapid expansion as well as the expanded scope, we decided to start this conference series to better include all fields relevant to our mission of bridging research and practice to improve access to opportunity for marginalized communities (e.g., Economics, Operations Research, Computer Science, Sociology, Law). Rediet Abebe, Irene Lo, and I served as Program Co-Chairs for this inaugural conference, working closely with our General Co-Chairs, Illenin Kondo and Francisco Marmolejo-Cossio, in organizing the first EAAMO conference.


How is the conference different from the MD4SG workshops?

The conference series is a natural continuation of the MD4SG workshop series (given the growth in size and scope since its inception). The conference aims to be inclusive of all the fields that create research related to the mission of our organization, including Economics, Operations Research, Computer Science, Sociology, and Law, among others. The conference also serves as a publishing venue for such research: as an ACM-sponsored conference, our archival track includes papers published with proceedings in the ACM Digital Library.


How is the conference creating a space for publishing research that relates to your mission?

EAAMO’21 aims to open avenues for creating and sharing research at the intersection of all the fields I mentioned through both the archival and non-archival tracks. In particular, original research can be published in the ACM Digital Library, where it can be recognized and shared in the research community. We hope that EAAMO can serve our community as a space for interdisciplinary research, in particular for the unique ideas and projects that aim to apply computational tools and humanistic methodologies in improving access to opportunities for marginalized groups. 


Why does the group aim to connect computer scientists with other non-computational groups such as non-profits and the public sector?

EAAMO’21 aims to foster an interdisciplinary community that can bridge research and practice in tackling topics such as access to education and healthcare, interventions for poverty alleviation, fairness and privacy in labor markets or data markets, and many other topics related to underserved communities. 

To this end, working with non-profits, the public sector, and practitioners is crucial in order to understand the main issues at stake in each of these applications and to construct research-to-practice pipelines that have an impact on the communities we aim to center at the core of our research agenda. The success of our workshop series and previous and ongoing projects relies on this multi-disciplinary approach and on engaging domain experts working in non-profit organizations, municipalities, and companies. Domain-centered interdisciplinary work has always been the focus of MD4SG activities. 

Since its inception, MD4SG has organized various working groups in which students, researchers, and practitioners work on particular topics of interest. Our current working groups vary from 15 to 100+ people in size each and organize bi-weekly meetings with talks, discussions, and publication goals. Our groups have fostered cross-domain collaborations that led to several publications. As of Fall 2020, MD4SG has also organized working groups around specific geographical regions to foster collaborations on topics of relevance related to mechanism design for social good. 


How will the conference facilitate these collaborations?

EAAMO’21 featured keynote talks from leading academics and practitioners in domains related to the conference theme, presentations of submitted papers, problem pitches, datasets, and software demonstrations by participants, problem pitches and product demonstrations from domain experts and practitioners, as well as thematic policy & practice discussion panels with practitioners focused on Latin American topics and migration and asylees topics. 


Are you working on any projects that resulted from the MD4SG workshops and EAAMO? Please describe them and how they are going.

Definitely. Our working groups are continuously pursuing projects that stemmed from our work together in MD4SG as well as from the MD4SG workshops. A recent paper that came out of the MD4SG Working Group on Bias and Discrimination can be found here. Other ongoing projects are related to provisions for social goods (in the Inequality Working Group, for example). My co-organizers have several projects published and ongoing, for example, from the Data Economies Working Group, found on this page.


How can people become part of MD4SG?

We encourage people who are interested in joining MD4SG to subscribe to our (low volume) listserv, where we post opportunities to join working groups, events, collaborations, and related activities. Our website contains a detailed description of all of our activities as well.