This is a rapidly expanding area of research, especially with the boom of neural interfaces in recent years, with many groups focusing on various aspects of such devices. Our group is pursuing a full-system approach, from fabrication of the devices to in-vivo testing and to clinical trials. In our system, silicon photovoltaic pixels convert light into electrical current to stimulate the nearby neurons.
Further improvement of visual acuity requires miniaturization of the pixels and faces many challenges. Simply scaling the pixel size down with flat bipolar arrays decreases the penetration depth of the electric field into the tissue, increasing the stimulation threshold beyond the capacity of even the best charge-injection material.
To enable smaller pixels, moving from flat electrodes to a three-dimensional configuration helps mitigate these issues. Additionally, with planar junction diodes isolated by deep reactive ion etched (DRIE) trenches, carrier recombination at the pixel side walls may limit the light-to-current efficiency, and growing side-wall oxide results in oxidation-related stress in the Si, especially as pixels scale down.
We addressed these limitations by transitioning from planar to vertical junction diodes. In this thesis, I will discuss the limitations on reducing the pixel size and how to overcome them by using (1) pillar electrodes, (2) honeycomb electrodes, and (3) vertical junction diodes, validated ex-vivo and in-vivo.
I present the design and fabrication processes of such devices and also demonstrate the resulting photodiode functionality, electrode performance, retinal integration with 3-D devices in-vivo, and electrophysiological responses. I conclude by discussing the remaining work toward full utilization of such devices and moving toward single-cell resolution. Online 4. Algorithms for black-box safety validation. Corso, Anthony Louis, author. Since human lives are at risk in these applications, we require rigorous safety validation before deployment.
Traditional safety validation approaches such as real-world testing and scenario-based testing in simulation are not scalable to complex systems and environments and may miss unforeseen failures. Formal verification techniques also lack the scalability required for large-scale autonomy. The thesis addresses the safety validation problem with black-box sampling techniques, which assume no knowledge of the design of the autonomous system.
The system takes actions in a stochastic environment and failures are discovered by sampling environmental disturbances. The black-box assumption allows for better scalability to complex autonomous systems and sampling can be combined with machine learning to discover unforeseen failures.
Previous black-box safety validation approaches have been based on optimization, path-planning, reinforcement learning and importance sampling. Although successful for many safety validation applications, existing algorithms may have poor interpretability, scalability, and efficiency.
Black-box sampling approaches can provide example failure trajectories but do not provide a high-level description of failures, as scenario-based approaches do. We present a new technique for generating failure descriptions in the form of signal temporal logic specifications on the environment disturbances. The specifications are optimized with genetic programming to produce failure examples and can be used to gain insight into why a failure occurred.
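Signal temporal logic specifications of this kind are built from atomic predicates over disturbance signals and evaluated with quantitative (robustness) semantics; genetic programming then searches over compositions of such predicates. A minimal sketch of the robustness computation, where the trace, threshold, and operator names are illustrative and not taken from the thesis:

```python
import numpy as np

# Hypothetical disturbance trace (e.g., a pedestrian's acceleration over time)
trace = np.array([0.1, 0.3, -0.2, 0.8, 0.5])

# Quantitative semantics for two STL temporal operators over the whole trace:
# "eventually x > c" holds iff the signal ever exceeds c, and its robustness
# is the maximum margin; "always x < c" uses the minimum margin instead.
def eventually_gt(x, c):
    return np.max(x - c)   # positive iff the specification is satisfied

def always_lt(x, c):
    return np.min(c - x)   # positive iff the signal always stays below c

# A candidate failure description like "the disturbance eventually
# exceeds 0.6" holds on this trace with robustness 0.2:
rho = eventually_gt(trace, 0.6)
```

A genetic-programming search would mutate and recombine such predicates, scoring each candidate by how well it separates failing from non-failing trajectories.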
A key contribution of this thesis is the proposal and analysis of a state-dependent sampling distribution to approximate the distribution over failures. The use of the state of the environment produces a more efficient sampling distribution than baseline importance sampling approaches, but may be limited by the size of the state space. To improve scalability, we propose a decomposition technique for multi-agent validation tasks.
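The thesis's sampling distribution is state-dependent; as a simpler illustration of the underlying idea, the sketch below estimates a rare failure probability with a state-independent, mean-shifted importance-sampling proposal over environmental disturbances. The toy environment, threshold, and shift are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "environment": 20 i.i.d. N(0, 1) disturbances per trajectory;
# a failure occurs if their sum exceeds 12 (rare under the nominal model)
n_steps, thresh, n_samples = 20, 12.0, 5000

# Importance sampling: draw disturbances from a shifted proposal N(mu, 1)
# that makes failures common, then reweight by the likelihood ratio p/q
mu = thresh / n_steps                   # shift each step's mean toward failure
x = rng.normal(mu, 1.0, size=(n_samples, n_steps))
log_w = (-0.5 * x**2 + 0.5 * (x - mu)**2).sum(axis=1)  # log(p/q) per trajectory
fail = x.sum(axis=1) > thresh
p_hat = np.mean(fail * np.exp(log_w))   # unbiased estimate of P(failure)
```

For this Gaussian toy problem the true failure probability is about 0.0037; a naive Monte Carlo estimate with the same budget would see only a handful of failures, while the tilted proposal observes them on roughly half the samples.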
Each subproblem is solved independently and the results are combined for better performance than learning from scratch. During the design of an autonomous system, safety validation is performed repeatedly, requiring a large computational expense. We propose a transfer learning technique that can reduce the number of required samples and lead to better performance.
Knowledge from previous validation tasks is transferred to new tasks in the form of value functions that are combined using a learned set of attention weights. Results show improved knowledge transfer between tasks compared to baseline techniques. The safety validation algorithms presented in this work are tested on two gridworld scenarios and two driving scenarios.
A simple gridworld scenario is used to illustrate important safety validation concepts while a gridworld with multiple adversaries is used as a test case for multi-agent validation. A rules-based autonomous driving policy is tested in a crosswalk scenario with a pedestrian and a T-intersection scenario with multiple vehicles.
It is shown that the presented algorithms can improve the interpretability, scalability, and efficiency of safety validation. Online 5. In this thesis, I study a simple randomized algorithm for training neural networks with extremely low memory overhead: "guess the gradient" (GTG). I describe how to efficiently compute the directional derivative of the network's loss with respect to a randomly hypothesized gradient, and use this information to refine the hypothesis into a noisy unbiased gradient estimator that can be passed to a standard gradient descent optimizer.
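A minimal numpy sketch of the estimator just described, using a finite difference in place of forward-mode differentiation; the quadratic loss and sample counts are illustrative, not from the thesis:

```python
import numpy as np

def loss(w):
    # simple quadratic loss, assumed for illustration; its gradient is w itself
    return 0.5 * np.sum(w ** 2)

def gtg_gradient_estimate(loss_fn, w, eps=1e-6, rng=None):
    """One 'guess the gradient' step: sample a random direction v, measure
    the directional derivative of the loss along v, and scale v by it.
    With v ~ N(0, I), E[(g.v) v] = g, so this is an unbiased (but noisy)
    estimator of the true gradient g."""
    rng = rng or np.random.default_rng()
    v = rng.standard_normal(w.shape)
    # directional derivative via a forward finite difference; forward-mode
    # autodiff would give the same quantity without backpropagation
    dderiv = (loss_fn(w + eps * v) - loss_fn(w)) / eps
    return dderiv * v

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 3.0])
# averaging many estimates should approach the true gradient (here, w itself)
est = np.mean([gtg_gradient_estimate(loss, w, rng=rng) for _ in range(20000)],
              axis=0)
```

Each estimate needs only two loss evaluations and no stored activations, which is the source of GTG's low memory overhead; the price is per-sample variance that grows with the dimension, matching the O(N) slowdown discussed below.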
Previous theoretical work has concluded that in convex settings, GTG-like algorithms suffer an O(N) slowdown for N-dimensional problems, making them impractical for large-scale deep learning. However, because the directional derivative can be computed without backpropagation, GTG can be run using very little memory.
This valuable property, along with the possibility of a simple (novel, to our knowledge) variance reduction technique, encourages us to nonetheless try applying GTG in memory-bound deep learning settings. We find that in practice GTG does not perform well on a standard deep learning optimization task — but, curiously, not for the "obvious" reason of O(N)-slower convergence.
In early phases of training GTG indeed does as well as SGD with a comparable step size; however, in later phases we observe a sudden "plateauing" phenomenon that is as yet unexplained. Understanding this phenomenon could suggest a way to make GTG practical, or, failing that, shed light on the surprising effectiveness of SGD. Online 6. An analysis of stem cell trajectories and their molecular determinants. Wesche, Daniel Jonathan, author. A detailed understanding of stem cells and their properties is required to unlock the vast promises of 21st century biology across topics ranging from regenerative medicine to gene editing to reproductive medicine.
Starting from the discovery of hematopoietic stem cells, stem cells have been identified in most developing and adult tissues. Of particular interest to regenerative medicine are tissue-specific stem cells. They exist throughout the organism's life and can thus, in theory, be isolated from, and their abilities harnessed for, most adult patients.
Two central questions have to be answered for every adult tissue of interest: (1) what is the identity of the stem cell in the tissue; and (2) how are the tissue's stem cells' properties regulated; specifically, how are they maintained throughout life and what molecular signals induce their differentiation into any of the cell types they can generate? The first question is therefore of interest to the many remaining tissues of which the description of the cellular hierarchy is incomplete.
This includes many cancers, which tend to display slightly altered hierarchical relationships compared to their normal tissue equivalents. A notable exception is the hematopoietic stem cell, which is understood in significantly more detail than any other adult stem cell. The hematopoietic system is thus a prime candidate to study the second question, both regarding general themes of stem cell biology as well as the development of methods of analysis that can eventually be applied to other stem cell systems.
In this dissertation, I examine the utility of gene expression and chromatin accessibility in two contexts: (1) globally, to identify stem cells, and (2) regarding specific genes, to understand behavioral changes of stem cells. In the first part of this dissertation I address the question of stem cell identity. I first discuss how to identify stem cells and tissue hierarchies in a data-driven manner, starting from scRNA-seq data.
Next, I show the application of these principles to the human liver. The second part of this dissertation concerns stem cell behavior regarding differentiation in the example of the hematopoietic system. In Chapter 4 I assess stem cell heterogeneity and its influence on functional outcomes of differentiation.
Finally, I discuss preliminary evidence for a novel feedback mechanism from the peripheral lymphoid lineage to the hematopoietic stem cell in Chapter 5. Online 7. Gao, Xuhua, author. Description Book — 1 online resource.
Summary Besides the ocean and the atmosphere, the solid Earth is also subject to tidal forces, and the tide-induced deformation of the solid Earth can be observed and utilized to retrieve useful subsurface information. In this work, we illustrated the application of Earth tide analysis in subsurface monitoring by covering topics including: extraction of Earth tide signals from downhole pressure data; analysis of extracted tidal signals and computation of amplitudes and phases; the analytical relationship between the reservoir tidal response and the theoretical tidal stress for different types of reservoirs; effects of wellbore storage and skin on reservoir tidal responses; the radius of influence of the Earth tide analysis; and atmospheric loading effects.
Tidal fluctuations have been observed in downhole pressure measurements for a long time. We studied the application of data spline and the Savitzky-Golay (S-G) filter in extracting tidal signals from downhole pressure data.
It was found that both algorithms can extract tidal signals effectively with appropriate nodal distance or approximation window size. The data spline and the S-G filter can be combined to extract and smooth the tidal signals.
Discrete Fourier transform can be applied to decompose the extracted signals and compute the amplitude and the phase corresponding to a tidal constituent. The application of a phase interpolation approach and the Hanning window can improve the accuracy of the phase estimation. An integrated tidal analysis approach based on data spline, S-G filter, discrete Fourier transform, phase interpolation and the Hanning window was developed to perform the extraction of tidal signals from downhole pressure data, the decomposition of extracted signals into different tidal constituents, and the computation of amplitude ratios and phase shifts.
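On synthetic data, the extraction pipeline described above (trend removal, then windowed Fourier decomposition) can be sketched as follows. The pressure model, filter settings, and amplitudes are illustrative; the full approach would add a spline stage and phase interpolation between frequency bins:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic hourly downhole pressure: a slow depletion trend plus an
# M2 tidal oscillation (all values here are made up for illustration)
t = np.arange(720.0)                      # 30 days of hourly samples
f_m2 = 1.0 / 12.4206                      # M2 tidal frequency, cycles/hour
p = 1000.0 - 0.01 * t + 0.05 * np.cos(2 * np.pi * f_m2 * t + 0.7)

# Step 1: the S-G filter estimates the smooth trend; the residual is
# the extracted tidal signal
trend = savgol_filter(p, window_length=73, polyorder=2)
tidal = p - trend

# Step 2: Hanning window plus discrete Fourier transform to obtain the
# amplitude and phase of the dominant tidal constituent
win = np.hanning(len(tidal))
spec = np.fft.rfft(tidal * win)
freqs = np.fft.rfftfreq(len(tidal), d=1.0)
amps = 2.0 * np.abs(spec) / win.sum()     # window-corrected amplitudes
k = np.argmax(amps[1:]) + 1               # skip the DC bin
f_est, a_est, phase_est = freqs[k], amps[k], np.angle(spec[k])
```

Because the constituent frequency rarely falls exactly on a Fourier bin, the peak amplitude is scalloped; this is the motivation for the phase-interpolation step mentioned in the abstract.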
Tidal response models were established to describe the relationship between reservoir and wellbore properties and the tidal information obtained from the extracted signals, including the amplitude ratio and the phase shift. The tidal response models were elaborated under different reservoir and wellbore conditions.
For perfectly confined reservoirs without fluid flows, the tidal efficiency and the loading efficiency can be utilized to monitor poroelastic property changes in onshore and offshore reservoirs respectively. General tidal response models were developed for confined reservoirs with only horizontal flows and semiconfined reservoirs with both horizontal and vertical flows.
The skin effect and wellbore storage effect were considered in the analytical models. For confined reservoirs, the amplitude ratio and the phase shift were expressed as functions of dimensionless transmissivity, dimensionless wellbore storage, and the skin factor. We found that a higher positive skin factor can lead to more negative phase shifts, and a negative skin factor can potentially lead to a phase advance. For semiconfined reservoirs with vertical leakage, the amplitude ratio and phase shift also depend on the magnitude of the vertical leakage.
The analytical solution for semiconfined reservoirs indicates that larger vertical leakage can cause smaller amplitude ratio and larger phase advance or smaller phase lag. Based on the analytical solution, the effect of vertical leakage can be separated from that of enhanced permeability around the wellbore, and the phase shift contributed by each of the two effects can be evaluated independently.
A tidal response model based on a two-layer radial composite reservoir setting was developed to investigate the effects of radial heterogeneity on the Earth tide analysis. Wellbore storage and skin effects were considered in the tidal response model. The analytical solution indicates that the change in the amplitude ratio gradually decreases as the interface radius increases, and the amplitude ratio eventually converges to a constant value at the radius of influence.
The radius of influence of the Earth tide analysis is positively correlated with the effective diffusivity, which was defined as the ratio of the conventional reservoir diffusivity to the tidal frequency. The results given by the analytical model were compared with those from a reservoir simulator, and the radii of influence obtained from the two approaches were consistent. Finally, the effects of atmospheric loading on onshore reservoirs were studied, and an analytical model incorporating both the Earth tide effects and atmospheric loading effects was proposed.
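The effective-diffusivity definition above can be illustrated numerically. The diffusivity value below is hypothetical, the tidal frequency is taken here as the M2 angular frequency (the thesis's exact frequency convention may differ), and only the diffusive length scale is computed, since the proportionality constant for the radius of influence is not specified here:

```python
import math

# Effective diffusivity for Earth tide analysis: the conventional
# hydraulic diffusivity divided by the tidal frequency.
D = 0.5                                  # hydraulic diffusivity, m^2/s (assumed)
period_m2 = 12.4206 * 3600.0             # M2 tidal period, seconds
omega = 2.0 * math.pi / period_m2        # tidal angular frequency, 1/s
D_eff = D / omega                        # effective diffusivity, m^2
r_scale = math.sqrt(D_eff)               # diffusive length scale, m
```

This makes the stated correlation concrete: a higher-diffusivity (or lower-frequency) system probes a larger radius around the wellbore.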
The solution to the analytical model provided the combined wellbore pressure response to both effects. The wellbore storage and skin effects were incorporated in the combined model, and it was found that larger wellbore storage or skin effects can result in smaller amplitude and longer time delay of the combined response.
Online 8. Applied single-cell methods for basic and translational immunology. Glass, David Richard, author. Understanding the role of each cell in that network requires accurate quantification of informative biological features of single cells. Here, we innovated and applied single-cell methods and purpose-driven computational analyses to problems in basic and translational immunology. We developed a highly multiplexed screen to quantify the co-expression of surface molecules on millions of human B cells.
We identified differentially expressed molecules and aligned their variance with isotype usage, VDJ sequence, metabolic profile, biosynthesis activity, and signaling response. Based on these analyses, we proposed a classification scheme to segregate B cells from four lymphoid tissues into twelve unique subsets, providing a framework for further investigations of human B cell identity and function. Additionally, we introduced morphometry, a high-throughput, quantitative, single-cell mass-cytometry-based assay that measures cell morphological features by their underlying molecular components.
We applied multiplexed morphometric profiling and surface molecule immunophenotyping to 71 diverse clinical hematopathology samples and demonstrated that our approach was superior to flow cytometry and comparable to expert microscopy for tumor cell identification and enumeration.
We introduced linear discriminant analysis (LDA) to generate morphometric maps that facilitate visualization and quantification of tumor cells. This contextualization of traditional surface markers on independent morphometric frameworks permits more sensitive and automated diagnosis of complex hematopoietic diseases.
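As an illustration of the LDA step, here is a minimal Fisher discriminant on synthetic two-feature data; the populations, feature values, and separation are hypothetical stand-ins for real morphometric measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical morphometric features (e.g., size, granularity) for two
# cell populations; real data would come from mass cytometry
normal = rng.normal([1.0, 0.0], 0.4, size=(200, 2))
tumor = rng.normal([2.5, 1.5], 0.4, size=(200, 2))

# Fisher's linear discriminant: project onto w = Sw^-1 (mu1 - mu0),
# the axis that best separates the two classes relative to their scatter
mu0, mu1 = normal.mean(axis=0), tumor.mean(axis=0)
Sw = np.cov(normal, rowvar=False) + np.cov(tumor, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# A 1-D "morphometric map": tumor cells score higher along the axis
scores_normal = normal @ w
scores_tumor = tumor @ w
```

In the dissertation's setting the same projection idea yields low-dimensional maps on which tumor and normal populations can be visualized and enumerated.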
Online 9. Applying super-resolution microscopy to investigate the regulatory structure of the genome. Mateo, Leslie Johanna, author. Both the spatial and temporal expression of a gene are largely regulated by non-coding sequences in the genome. The genome is folded into compartments, topologically associating domains (TADs), and loops, as determined by sequencing-based technologies such as Hi-C.
Many of the differences in cell type arise from specific interactions between distal enhancers and their target promoters, which are typically located thousands to hundreds of thousands of basepairs apart. Long-range enhancer and promoter activity and the specificity of enhancer-promoter interactions are believed to arise from cell-type-specific genome folding.
How this genome organization is established and regulated during development is not well understood. Hi-C and other sequencing-based assays lack information pertaining to the spatial organization of cells in tissues, and largely provide population-level rather than single-cell information, which makes it challenging to understand how genome folding might contribute to differences among cell types. Thus, there is a great need for approaches that provide a view of the chromatin organization and transcriptional activity in single cells.
Here, I present my work developing and using a super-resolution technique to gain such an unprecedented view. We discovered that single cells do have TAD-like structures that are heterogeneous across cells. However, the boundary positions of these single-cell TADs preferentially lie at binding sites of the insulator protein CTCF and of cohesin. Although cohesin is crucial for the presence of TADs at the population level, we found that the TAD-like domains in single cells are not dependent on cohesin.
Thus, my findings using ORCA in cultured cells (Chapter 2) shed important new light on genome organization in single cells. My interest in gene regulation led me to expand our microscopy approach by making ORCA compatible with multiplex RNA imaging to enable direct correlation between chromatin structure and gene expression on a cell-by-cell basis.
Furthermore, I expanded our experimental system by applying ORCA to cryosectioned Drosophila embryos to investigate the role of 3D genome structure in loci, such as the bithorax complex (BX-C), with well-studied enhancers. Using embryos with genetic perturbations allowed me to determine that the genetic elements at TAD boundaries drive proper cell-type specific enhancer-promoter contacts and gene expression. My results (Chapter 3) suggest that architectural proteins, such as CTCF and cohesin, at TAD boundaries are responsible for the establishment of 3D organization during development.
Additionally, my results emphasize the need to study cell-type specific chromatin structures on a cell-by-cell and cell-type basis, an area that is still largely unexplored. To facilitate such exploration, I worked towards making our approach accessible to other researchers who are interested in 3D genome architecture and transcriptional activity (Chapter 4).
To determine the role of architectural proteins in genome organization (Chapter 5), I took advantage of Drosophila genetics and obtained null allele mutant embryos that lacked zygotic expression of architectural proteins such as Rad21, Wapl, CTCF, and CP. However, as the maternal transcripts for these architectural proteins were present throughout embryogenesis, the maternally encoded proteins appeared to be sufficient to retain genome structure in the zygotic null mutants.
My results raise the possibility that other Drosophila insulator binding proteins, such as CP, may play a redundant insulation function. To examine the role of various cis-acting insulator elements, I have begun preliminary studies investigating how inserting insulators into the genome affects long-range cis-regulatory interactions (Chapter 6). Overall, the development of ORCA has enabled us to begin understanding the mechanisms underlying genome organization and their role in regulating transcription in a complex tissue.
As our techniques improve and become more accessible to other researchers in the field, we are certain that the methods we have developed will play a role in uncovering the function of various chromatin components, such as transcription factors and epigenetic state, in establishing 3D genome organization during development. Online Artistic vision: providing contextual guidance for capture-time decisions.
E, Jane Little, author. Many of these creative choices happen in real time during the capture process, as the photographer takes in the scene around them and navigates a space of so many possibilities and uncertainties. However, today's resources for learning photography, such as books, classes, and example photos, are largely disconnected from the capture process.
Photographers are therefore faced with the task of navigating, in real-time, a seemingly infinite space of possible creative choices while relying on a disconnected space of learning resources that can feel both inaccessible and overwhelming in the moment.
The primary insight of my research is that real-time contextual guidance, embedded directly in the camera, can make accessing relevant parts of this wealth of information more approachable and actionable. The feedback assists in cutting through the noise of endless possibilities and focuses photographers' attention on targeted, meaningful creative choices. My dissertation presents a set of capture-time interfaces that provide real-time contextual guidance.
This guidance takes the form of light touch cues presented as automatically generated visual overlays, where each overlay is designed to focus on a specific photographic concept. Each interface's goal is to understand what an expert might be noticing in considering the targeted photographic concept and to, via an annotation overlay, direct a novice user's awareness in a similar manner.
In designing this real-time contextual guidance, I take inspiration from photographers' current practice of directing attention through manually drawing annotations onto photos. Today, this practice is mostly restricted to post-hoc feedback used to point out specific decisions or potential mistakes that the artist made. I develop algorithmic approaches designed to understand conceptually relevant aspects of the scene that the photographer is viewing.
These algorithms generate annotations that are displayed in the camera in real time. The annotations can move beyond explaining why a specific decision was made, towards helping the photographer become aware of artistic choices that could be made, providing guidance while encouraging creativity and exploration. Through the overlays, we hope to help novices train their eye to see in the way that experts do.
Specifically, I present in-camera guidance interfaces tackling three important photographic concepts: portrait lighting, composition, and decluttering. The portrait lighting tool helps users be more aware of the available lighting styles and reorient their subject to best achieve the lighting style of their choice. The composition guidance tool makes users more aware of the current composition by highlighting lines in a composition grid that are most relevant to the camera view.
The decluttering tool increases users' awareness of clutter that would draw attention away from the main story of the image by abstracting the camera view to outline edges around the subject(s) or the image borders. For each interface, I describe my process for designing a novice-interpretable visualization and how it captures context relevant to the target concept.
I then evaluate each interface by asking novice photographers to take photos with these tools while focusing on their target concept. Together, these tools and their evaluations demonstrate that such awareness-based visual guidance camera interfaces can help people be more intentional about their artistic choices.
By making users more aware of possible options and mistakes, the interfaces introduced in this dissertation encourage users to explore the space in a more informed manner. In this way, the tools presented in my dissertation help users become more confident in their ability to achieve their artistic goals. Cheung-Miaw, Calvin Ryan, author.
In particular, it explains why Third Worldism - the belief that Asian American, Chicanx, African American, and Native American communities faced analogous, though not identical, situations of racial oppression - went from being ensconced within Asian American Studies to appearing untenable to its former adherents, over the course of three decades.
I argue that this shift developed from theories Asian American intellectuals mobilized in response to conflicts in the s that pitted Asian Americans against other communities of color. Drawing on Asian American Studies publications across fields ranging from legal studies to literary theory, as well as privately held collections, unprocessed records, and archival research, I explore the field-defining debates over Asian American political behavior, class, gender, educational access, and multiracial solidarity, from the beginning of the field in the late s to the turn of the 21st century.
I show how Third Worldism inspired Asian American Studies scholars - Asian Americanists - to develop analytical frameworks based on the idea that a unique anti-Asian racism affected all Asian Americans and provided the potential basis for ethnic and multiracial solidarity. These frameworks, however, produced unintended consequences. As rapid demographic changes within Asian America generated greater levels of ethnic and class diversity, and as those changes precipitated conflicts with other communities of color, the belief that anti-Asian racism grounded a common Asian American group interest actually led Asian American intellectuals to conclude that Asian American group interests might diverge from those of other communities of color.
In providing the first intellectual history of Asian American Studies, the project locates the historical roots of contemporary controversies over relations between Asian American communities and other communities of color. Students may use this sample template as a guide. If the committee is not correctly listed in Axess, such as a missing committee member, a committee member that should be removed, or no reading committee members listed at all, students should contact their department Student Services Officer to have the information updated before beginning the eForm process.
Log on to Axess, and select the Student tab, then click on the "Student eForms" quick link. After you submit this form, it will be manually reviewed by the Registrar (please allow 2 business days). You will receive notification from the Registrar letting you know that the eForm has been accepted and you can submit your dissertation.
To accommodate this processing time, we suggest students submit this form at least 48 hours prior to the Dissertation Deadline of the effective quarter. This will allow sufficient time for staff to process and enter forms. Note: These preparation guidelines are minimum standards for professional presentation of your doctoral work. The Office of the University Registrar, which is responsible for administering dissertation and thesis submission, encourages students to ask questions about format before final preparation of the manuscript.
A non-conforming submission may have to be redesigned and resubmitted, with a possible delay in degree conferral. Previously approved dissertations are not a reliable guide for preparation of a manuscript as guidelines may have changed. Stanford is committed to the preservation and dissemination of the scholarly contributions of its students.
However, the degrees will be posted at a to-be-determined date in the week of June. The electronic submission process is not available for master's theses or undergraduate honors theses. The above slide presentation, produced by Stanford University Libraries in consultation with the Office of the General Counsel, is designed to inform students about copyright issues, in particular the choices and decisions a student faces in the process of submitting a dissertation or thesis electronically.
For further information on preparing and submitting doctoral dissertations, Engineer degree theses, and D. The Registrar's Office is proud to be part of Student Affairs , which educates students to make meaningful contributions as citizens of a complex world.