Semla: a versatile toolkit for spatially resolved transcriptomics analysis and visualization

Abstract

Summary: Spatially resolved transcriptomics technologies generate gene expression data with retained positional information from a tissue section, often accompanied by a corresponding histological image. Computational tools should make it effortless to incorporate spatial information into data analyses and present analysis results in their histological context. Here, we present semla, an R package for processing, analysis, and visualization of spatially resolved transcriptomics data generated by the Visium platform, which includes interactive web applications for data exploration and tissue annotation.

Availability and implementation: The R package semla is available on GitHub (https://github.com/ludvigla/semla) under the MIT License and deposited on Zenodo (https://doi.org/10.5281/zenodo.8321645). Documentation and tutorials with detailed descriptions of usage can be found at https://ludvigla.github.io/semla/.


Package dependencies
The following are the package dependencies, imports, and suggestions for semla (version 1.1).

Operating System testing
The semla R package was developed on macOS and has been further tested on multiple operating systems (OS) to ensure that it can be installed correctly and functions without error. OSs unavailable to the development team for local testing were checked remotely with the R-hub builder, using the R package "rhub" (Csárdi et al. 2023).

Usage description
We have built a web-based application called the Feature Viewer that allows interactive exploration and labeling of a spatially resolved transcriptomics (SRT) data set. The application is written in JavaScript and uses the React UI library. Some of its functionality is similar to that of the Loupe Browser (10x Genomics) or the ST viewer (Fernandez Navarro et al. 2019). Compared with the interactive user interfaces provided by other R packages for Visium data analysis (spatialLIBD (Pardo et al. 2022), Seurat (Hao et al. 2021), SPATA2 (Kueckelhaus et al. 2023), and Giotto (Dries et al. 2021)), the Feature Viewer in semla allows users to interactively explore the histology image at higher resolution and different levels of magnification, select among thousands of features to plot, easily switch between the available samples, and use a selection tool to annotate spots and save the annotations as labels for downstream analysis.
To use the Feature Viewer, the user first needs to import their SRT data with semla and load the coupled histology image using the LoadImages() function. Once the data of interest is available in R, the Feature Viewer can be initiated with the semla function FeatureViewer():

se <- FeatureViewer(se)

where "se" represents a Seurat object compatible with semla. Running this command within an R session will open a new web browser window containing the interactive user interface with your data. When using a very large image, it is possible to speed up the launch of the Feature Viewer by tiling the image beforehand using the ExportDataForViewer() function (this process can take some time; see the Performance section for additional information). Supplementary Figure 1 illustrates the user interface of the Feature Viewer application.
Supplementary Figure 1. Feature Viewer used to visualize the mouse colon demo data set available within semla.
To save and exit the Feature Viewer, the user needs to press the "save & quit" icon and may thereafter close the browser window and return to the R session. Any annotations made with the lasso tool will be saved as additional metadata columns in the Seurat object when the Feature Viewer is closed. The R session will be occupied for as long as the viewer is active.
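For reference, a minimal end-to-end sketch of the workflow described above. The infoTable layout follows the semla documentation to the best of our knowledge, and the file paths are placeholders:

library(semla)

# Assemble an infoTable pointing to Space Ranger output files (paths are placeholders)
infoTable <- data.frame(
  samples   = "path/to/filtered_feature_bc_matrix.h5",
  imgs      = "path/to/tissue_hires_image.png",
  spotfiles = "path/to/tissue_positions_list.csv",
  json      = "path/to/scalefactors_json.json"
)

se <- ReadVisiumData(infoTable)  # import the Visium data
se <- LoadImages(se)             # load the coupled histology image(s)
se <- FeatureViewer(se)          # launch the interactive viewer in the browser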
In addition to the Feature Viewer application, semla provides another interactive user interface (UI) for aligning samples (Supplementary Figure 2). With the alignment application, the user can apply transformations such as rotation, mirroring, and translation to the images of multiple tissue sections. The tool is available through the RunAlignment() function:

se <- RunAlignment(se)

More details on how to use the alignment tool are available on the semla tutorial website: https://ludvigla.github.io/semla/articles/image_alignment.html.

Supplementary Figure 2. Example case of aligning three tissue sections to achieve matching tissue orientation and size. A) Sections 1-3 before alignment. B) The interactive alignment application is used to adjust the individual images before saving the transformations. C) The newly aligned images can be used for generating spatial feature plots.

Performance
The Feature Viewer employs image tiling, through the TileImage() function, to enhance interactivity with H&E images. The computation time for the tiling step depends on the number of samples, the desired number of zoom levels, and the size of the input images. For a single tissue section data set with an H&E image of approximately 2,000 × 2,000 pixels, the tiling step with TileImage() completes in a few seconds. For reference, in a test data set, the tiling process took approximately 3 seconds to complete on a MacBook Pro laptop (2017 model, 3.1 GHz Quad-Core Intel Core i7, 16 GB RAM). TileImage() automatically determines a suitable number of zoom levels based on the size of the input image; for instance, a 2,000 × 2,000 H&E image results in the creation of three layers containing 2 × 2, 4 × 4, and 8 × 8 tiles. Once the tiling step is completed, the tiles are stored locally for future use and the viewer can be launched almost instantaneously in the default browser.
To execute the image tiling separately, the utility function ExportDataForViewer() can be employed, which offers an option to accelerate the tiling process by utilizing multiple threads. When dealing with larger H&E images and/or multiple tissue sections, a greater number of tiles must be exported, which extends the overall processing time. The image tiling performed through this approach is a one-time operation and will therefore allow the Feature Viewer to be launched quickly for as long as the exported tiles are available. Once the tiles have been supplied, the FeatureViewer() function can be invoked to open the viewer, which launches almost instantaneously within a web browser.
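A call sketch of this one-time tiling step; the outdir and nCores argument names are assumptions based on the description above, not confirmed API, and should be checked against the package reference:

# Pre-tile large H&E images once so the viewer launches quickly later
ExportDataForViewer(se, outdir = "~/semla_tiles", nCores = 4)
se <- FeatureViewer(se)  # launches almost instantly once tiles exist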

NNLS cell type mapping methodology description
Semla includes an approach based on non-negative least squares (NNLS) for inferring cell type proportions in each spot. The method requires paired Visium and annotated scRNA-seq data generated from the same source. In short, the method first estimates cell type enrichment scores from the annotated cells in the scRNA-seq data by comparing their normalized and averaged gene expression profiles. This scoring scheme assigns higher weights to genes that are cell type specific, thereby providing a profile that describes relative differences between the cell types. The enrichment profiles are subsequently fed into the NNLS method to predict the composition of cells in the Visium data. Given the Visium gene expression matrix A and the cell type enrichment profiles y, the NNLS method attempts to solve the following problem:

$$\min_{x \geq 0}\; \lVert yx - A \rVert_2^2$$

where the solution for x represents the cell type estimates in the Visium data.
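To illustrate the objective above (this is not semla's internal implementation), the problem can be solved per spot with the CRAN nnls package, assuming y is the gene × cell type profile matrix and A the gene × spot expression matrix:

library(nnls)
# Solve min ||y %*% x - A[, j]||^2 subject to x >= 0 for each spot j
x_hat <- apply(A, 2, function(a_j) nnls(y, a_j)$x)
props <- sweep(x_hat, 2, colSums(x_hat), "/")  # normalize estimates to proportions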

Within semla, the NNLS cell type mapping is executed by running the RunNNLS() function, providing the spatial object of interest along with a normalized scRNA-seq Seurat object and specifying the metadata column containing the cell group annotations (see the call sketch below). With a reasonably sized data set, the analysis should finish in a matter of seconds. The resulting cell type proportion estimates are stored in a new assay, "celltypeprops", within the spatial object, where they are easily accessible for downstream exploration and visualization.
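A call sketch reconstructing the command implied above; the singlecell_object and groups argument names follow the semla documentation to the best of our knowledge and should be verified against the package reference:

# 'se' is the spatial (Visium) Seurat object, 'se_sc' a normalized scRNA-seq Seurat object
se <- RunNNLS(object = se,
              singlecell_object = se_sc,
              groups = "cell_type")  # metadata column with the cell annotations
# The estimates are stored in the "celltypeprops" assay of 'se'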

Benchmarking the NNLS methodology
To assess the utility of the NNLS cell type mapping approach, we compared NNLS with other established cell type mapping methods in order to benchmark its performance. The articles outlining the comparisons and containing the code for reproducing the plots are accessible on the semla website.

Publicly available Visium data
Initially, we tested the performance of NNLS, stereoscope (Andersson et al. 2020), and cell2location (Kleshchevnikov et al. 2022) using publicly available Visium data sets and tissue-type-matched single-cell RNA-sequencing (scRNA-seq) data sets (Supplementary Table 1). Stereoscope and cell2location were run in Python on high-performance computing servers using default parameter settings, while NNLS was run locally on a laptop. Examining the Pearson correlation between the cell type compositions inferred by the different methods, we observed high concordance for both tested tissue types (Supplementary Figure 3). Unfortunately, the ground truth cell type composition in each spot of the Visium data is unknown, making it impossible to evaluate the actual accuracy of the NNLS method with these data sets. However, stereoscope and cell2location are both well-established methods for estimating cell type abundances, and we therefore deem the NNLS approach to produce comparable results.

Mouse kidney
Tabula Muris Senis droplet data from kidney, made available by The Tabula Muris Consortium.

Supplementary Figure 3. Comparison of output from the cell type mapping algorithms NNLS, stereoscope, and cell2location on mouse brain (A) and mouse kidney (B) data sets, demonstrating overall high Pearson correlation values for most cell types.

Synthetic data
To evaluate the performance of the NNLS approach on a data set where the ground truth cell type proportions per spot are available, we prepared a synthetic Visium data set. To make the synthetic data representative of a typical Visium data set, we opted to generate synthetic spots with an average of 10 cells and a median of 20,000 UMIs per cell. A count matrix from a single-cell RNA-seq (Smart-seq2) data set obtained from the Allen Brain Atlas (Tasic et al., 2016), comprising 14,249 cells, was used to create the synthetic spots. Only the top 5,000 most variable genes were kept in the UMI count matrix to speed up computation. Based on the distribution of UMI counts per cell in the scRNA-seq data set, we downsampled the count matrix to 1% using the downsampleMatrix function from the scuttle R package. Next, we generated cell counts by drawing 10,000 random numbers from a Poisson distribution with the mean set to 10 (the number of cells per spot). Zero values were replaced with a value of one to make sure that each synthetic spot would include at least one cell type. Each synthetic spot was created by sampling and aggregating N random cell expression vectors, where N represents the number of cells per spot. Sampling probabilities were biased to reflect the composition of cell types in the scRNA-seq data set, thereby ensuring that the cell type abundances of the scRNA-seq data set were reflected in the synthetic Visium data. The ground truth corresponds to the proportion of cell type labels for each synthetic spot. For run time performance assessment (NNLS and Seurat label transfer), we used the same approach to create a large synthetic data set with 100,000 spots. For all benchmark analyses, the scRNA-seq data set was downsampled to include a maximum of 250 cells per cell type and filtered to exclude cell types with fewer than 10 cells.
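A sketch of the synthetic spot generation described above, assuming 'sc_counts' holds the gene × cell UMI count matrix (object and variable names are ours):

library(scuttle)
library(Matrix)
set.seed(1)
sc_ds <- downsampleMatrix(sc_counts, prop = 0.01)    # downsample counts to 1%
n_cells <- rpois(10000, lambda = 10)                 # number of cells per spot
n_cells[n_cells == 0] <- 1                           # at least one cell per spot
synthetic <- sapply(n_cells, function(n) {
  # sampling cells uniformly reflects the cell type composition of the reference
  idx <- sample(seq_len(ncol(sc_ds)), size = n)
  rowSums(sc_ds[, idx, drop = FALSE])                # aggregate their expression
})
# Ground truth: proportions of cell type labels among the sampled cells per spot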

NNLS
The NNLS technique was executed using an upper limit of 250 cells per cell type and a minimum of 10 cells per cell type. To evaluate run time performance, the NNLS approach was run in 10 iterations, deconvolving expression data for 10,000 to 100,000 spots in the large synthetic data set (Supplementary Figure 4). All computations were run on a MacBook Pro (2017, 3.1 GHz Quad-Core Intel Core i7, 16 GB RAM).

Seurat label transfer
The Seurat label transfer method was used to calculate cell type prediction scores from the synthetic Visium data, following the Seurat tutorial on integration with single-cell data. The standard LogNormalize method was used to normalize the UMI count matrix. For run time performance assessment, the Seurat label transfer method was run in 10 iterations, deconvolving the mixed expression data for 10,000 to 100,000 spots in the large synthetic data set. All computations were run on a MacBook Pro (2017, 3.1 GHz Quad-Core Intel Core i7, 16 GB RAM).
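A minimal sketch of the label transfer step, assuming 'se_sc' (annotated scRNA-seq reference with a hypothetical "cell_type" column) and 'se' (synthetic Visium query) are normalized Seurat objects:

library(Seurat)
anchors <- FindTransferAnchors(reference = se_sc, query = se)
predictions <- TransferData(anchorset = anchors,
                            refdata = se_sc$cell_type)  # per-spot prediction scores
se <- AddMetaData(se, metadata = predictions)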

RCTD
The RCTD method was run using standard parameter settings following the RCTD tutorial on Visium data, with the doublet mode set to 'full'. Seven cores were used for the computation. For the RCTD deconvolution, a MacBook Pro (2017, 3.1 GHz Quad-Core Intel Core i7, 16 GB RAM) was used.
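A sketch of the RCTD run using the spacexr package, assuming 'sc_counts'/'cell_types' hold the reference counts and annotations (a named factor) and 'sp_counts'/'coords' the synthetic Visium counts and spot coordinates:

library(spacexr)
reference <- Reference(sc_counts, cell_types)
puck <- SpatialRNA(coords, sp_counts)
rctd <- create.RCTD(puck, reference, max_cores = 7)
rctd <- run.RCTD(rctd, doublet_mode = "full")
props <- normalize_weights(rctd@results$weights)  # cell type proportions per spot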

Stereoscope
For deconvolution with stereoscope, we used the scVI (https://scvi-tools.org/) implementation, following the tutorial 'Stereoscope applied to left ventricule data'. First, a model was trained on the scRNA-seq data with 1,000 epochs. Next, the model was used to deconvolve the synthetic Visium expression profiles in 2,000 epochs. For the stereoscope deconvolution, we used an NVIDIA A100-SXM4-80GB Tensor Core GPU.

Cell2location
For deconvolution with cell2location, we followed the tutorial 'Mapping human lymph node cell types to 10X Visium with cell2location'. First, a model was trained on the scRNA-seq data with 1,000 epochs. Next, the model was used to deconvolve the synthetic Visium expression profiles in 30,000 epochs. For the cell2location deconvolution, we used an NVIDIA A100-SXM4-80GB Tensor Core GPU.

NNLS computation time
The NNLS method can be used to deconvolve Visium data in a matter of seconds, even for larger data sets. Supplementary Figure 4 shows that the average computation time for 100,000 spots is 4 seconds.

Supplementary Figure 4. Computation time for NNLS on synthetic Visium data. The x-axis shows the number of spots included in the synthetic Visium data and the y-axis indicates the computation time in seconds. Each data point represents the average computation time across 10 iterations. The whiskers indicate the standard deviation.

NNLS vs Seurat label transfer computation time
The second fastest method included in this benchmark is the Seurat label transfer method used for cell type prediction. Although not a pure deconvolution method, this technique facilitates the probabilistic transfer of labels from a reference (single-cell) data set to a query (Visium) data set. As depicted in Supplementary Figure 5, the NNLS method consistently outperforms the Seurat label transfer method in terms of speed.

Supplementary Figure 6. Computation time for NNLS, Seurat label transfer, RCTD, stereoscope, and cell2location. The y-axis shows the computation time in seconds (log10 scale).

Performance assessment
To assess method precision, we employed two performance metrics: Pearson correlation and root mean square error (RMSE), visualized in Supplementary Figures 7-9. The Pearson correlation scores highlight that RCTD and cell2location show the strongest correlation between the inferred and ground truth cell type proportions. Conversely, NNLS and stereoscope exhibit slightly lower correlations for specific cell types. The Seurat label transfer method generally shows reduced correlation values across all cell types. The RMSE values follow a comparable pattern, with NNLS, RCTD, stereoscope, and cell2location demonstrating similar performance.
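For clarity, the two metrics computed per cell type, assuming 'est' and 'truth' are spots × cell types matrices of estimated and ground truth proportions (variable names are ours):

pearson <- sapply(colnames(truth), function(ct) cor(est[, ct], truth[, ct]))
rmse    <- sapply(colnames(truth), function(ct) sqrt(mean((est[, ct] - truth[, ct])^2)))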

Conclusion
Our benchmark analyses on synthetic Visium data indicate that the NNLS approach generates results comparable to RCTD, cell2location, and stereoscope. Notably, in contrast to these approaches, NNLS runs in a matter of seconds for data sets of up to 100,000 spots, without the need for hardware acceleration. RCTD and cell2location generated more accurate proportion estimates than NNLS, and we therefore encourage users to test these more robust and well-established cell type deconvolution algorithms. Nonetheless, we recognize that NNLS presents a viable alternative for fast cell type decomposition, which proves particularly beneficial during various stages of exploratory data analysis.
For instance, cell type deconvolution of SRT data often necessitates careful curation of the single-cell reference data set to make sure that the cell types included for deconvolution are present in the tissue section. This can be particularly challenging when working with large atlases that encompass whole organs or even multiple organs. Additionally, creating cell atlases from scRNA-seq data entails iterative refinement of quality filters and tuning of hyperparameters. Cell type deconvolution offers valuable insights for informing these choices by spatially mapping the cell types, yet this iterative process can be hindered by computation time and hardware constraints. With a fast deconvolution approach such as NNLS, this curation step becomes more time efficient and accessible to users who are not familiar with high-performance computing. Once the foundation of a single-cell reference is laid, users can subsequently employ a more robust deconvolution technique to attain enhanced accuracy in their proportion estimates.

Label Assortativity and Neighborhood Enrichment
The term assortativity is used within network science to describe the connectivity between nodes with similar properties. Traditionally, this is measured in terms of node degree, by computing a correlation coefficient between nodes of similar degree. Methods based on this idea include Newman's assortativity (Newman, 2002) and Ripley's K function (Ripley, 1976). Inspired by this, we have developed a straightforward approach to estimate the connectivity of spots belonging to the same cluster, i.e. sharing the same label, by measuring the network's average degree, ⟨k⟩, for each label and comparing this to a completely randomly dispersed pattern. The randomly distributed pattern can be viewed as the baseline, since we rarely obtain a pattern more dispersed than that, while a group of spots that is fully connected is the highest order of organization we can achieve. In this method, each label's ⟨k⟩ is therefore min-max scaled between the ⟨k⟩ of a randomized network (min) and the ⟨k⟩ of the fully connected network (max), as written out below. The output from this analysis, executed using the RunLabelAssortativityTest(se) function, is a table containing a scaled average degree for each unique label provided for the analysis, along with stored values of the intermediates used for the calculation of the final score. The scaled average degree ranges between ~0 and 1, with values around 0 corresponding to a randomly dispersed spatial pattern and values closer to 1 indicating an aggregated and highly connected organization of the spots of that label.
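Written out, the min-max scaling described above takes the following form (the subscript notation is ours; ⟨k⟩_label denotes the observed average degree of a label, and ⟨k⟩_rand and ⟨k⟩_full those of the randomized and fully connected networks):

$$\langle k \rangle_{\mathrm{scaled}} = \frac{\langle k \rangle_{\mathrm{label}} - \langle k \rangle_{\mathrm{rand}}}{\langle k \rangle_{\mathrm{full}} - \langle k \rangle_{\mathrm{rand}}}$$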
The purpose of a neighborhood enrichment analysis, on the other hand, is to test whether spots belonging to two different categories are localized next to each other spatially. To estimate the enrichment of their co-localization, we compare it against random permutations of the labels, forming the null hypothesis that the spots of the two labels are distributed randomly and share no connections other than those observed by chance. A z-score for each label pair is calculated as

$$z_{AB} = \frac{x_{AB} - \mu_{AB}}{\sigma_{AB}}$$

where x_AB is the number of edges observed between spots of labels A and B, μ_AB is the permutation mean of the edges between A and B, and σ_AB is the permutation standard deviation of the edges between A and B. Thus, a z-score of around 0 can be interpreted as a spatial label co-localization equal to that seen by chance, given the number of spots within those categories, while a positive z-score indicates an over-representation of the label pair proximity and a negative z-score can be viewed as a depletion, or repellent effect, of the label pair spatially. The neighborhood enrichment analysis is run by calling the function RunNeighborhoodEnrichmentTest(), which stores the results in an output table containing the z-scores for each label pair.
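A sketch of the permutation z-score for a single label pair (not semla's internal code); 'edges' is assumed to be a two-column matrix of spot indices and 'labels' the vector of spot labels, with "A" and "B" as placeholder label values:

count_AB <- function(edges, labels, a = "A", b = "B") {
  # count edges whose endpoints carry labels a and b (in either orientation)
  sum((labels[edges[, 1]] == a & labels[edges[, 2]] == b) |
      (labels[edges[, 1]] == b & labels[edges[, 2]] == a))
}
obs  <- count_AB(edges, labels)
perm <- replicate(1000, count_AB(edges, sample(labels)))  # permute the labels
z_AB <- (obs - mean(perm)) / sd(perm)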
The Label Assortativity and Neighborhood Enrichment tests included in semla are implementations and further developments of the heterotypic and homotypic scores described in Bäckdahl et al. (2021).

Digital unrolling
The "digital unrolling" approach was first presented by M. Parigi and colleagues (Parigi et al. 2022), developed to digitally unfold mouse colon samples rolled up into "swiss-rolls", enabling the analysis of gene expression variation along the proximal-distal axis of the organ.Semla includes a new strategy for "digital unrolling" that is conducted in two steps.In the first step, a spatial undirected network is created from the Visium spot coordinates, representing each spot as a node in the network where adjacent nodes are connected by an edge.The spatial network can then be visualized and modified using the On a final note, the cutting tool for digital unrolling provided in semla could in theory be used to unfold other tissue types.For instance, in the small intestine the tool could be used to separate circular folds (plicae circulae) or individual villi.This extension could yield supplementary insights that could be harnessed to analyze and model spatial patterns within these specific structures.The "unrolling" algorithm, which takes a connected spatial network as input, makes certain assumptions about the shape of the tissue and is therefore only useful for special applications.However, it proves highly advantageous when tasked with swiftly computing distances between endpoints within a fully connected spatial network.As exemplified in Supplementary Figure 14, this approach is demonstrated in a scenario where spots were manually selected using the Feature Viewer tool, followed by distance calculations using the AdjustTissueCoordinates algorithm.
The spatial omics field is rapidly evolving, with new computational tools and frameworks for handling the data being developed at high frequency. As of version 3.2 (2020), Seurat introduced new functionalities for processing Visium and Slide-seq data, allowing the user to visualize their spatial data and perform integration with annotated scRNA-seq data to infer the spatial localization of cell types. With Seurat's latest release, the version 5 beta (2023), support has moreover been added for imaging-based SRT data (Vizgen MERSCOPE, 10x Genomics Xenium, NanoString CosMx, and Akoya CODEX), together with a few further developed specialized methods such as niche identification. However, more specialized spatial analyses, such as those presented in the Squidpy and Giotto packages, are still missing from Seurat. Such spatial analyses may involve tools for computing spatial statistics or identifying spatial gene co-expression modules.
SpatialExperiment (Righelli et al. 2022) is another R-based framework for efficient handling of SRT data, comparable to the AnnData format available in Python. Toolkits utilizing the SpatialExperiment object format, such as spatialLIBD (Pardo et al. 2022), have started to emerge and provide a promising alternative avenue for SRT data analysis in R. Likely owing to the youth of these toolkits, the community developing new analysis methods for this object format is, however, still limited. The same can moreover be said for R packages that have developed their own object structures for SRT data, such as Giotto (Dries et al. 2021) and SPATA2 (Kueckelhaus et al. 2023).

Interactive UI tools
Interactive visualization of SRT data is of great benefit for anyone interested in exploring their spatial data, both researchers with limited programming expertise and experienced bioinformaticians who need to browse the data quickly and/or annotate selections of data points. 10x Genomics provides the Loupe Browser application for exploration of Visium data, which includes useful features such as spot annotation and rapid visualization of gene and cluster features, but is limited to the use of the .cloupe files output by Space Ranger, one sample at a time. Seurat includes an interactive Shiny application for visualizing spatial data, though it is designed as a plain plotting tool for visualizing features and lacks any ability to create new spot labels. Other interactive UI applications for Visium data are available and address various aspects of interactive data exploration, such as multi-sample handling or the incorporation of statistical analysis tools within the application. For instance, spatialLIBD is a powerful interactive UI application, powered by Shiny and Plotly, for exploring SRT data in the SpatialExperiment format. In comparison with the Feature Viewer provided in semla, it is not possible to return manual spot annotations to the R object in a one-step process with spatialLIBD, and it moreover does not handle zooming of the tissue image in an efficient way that would allow the user to utilize a high-resolution image for detailed exploration of their data in its histological context.

Spatial platform compatibility
As mentioned previously, Seurat is currently able to handle SRT data generated by several platforms. The Giotto and Squidpy libraries also include support for platforms other than Visium. Semla, on the other hand, has been developed with the primary intent of processing and analyzing Visium Gene Expression data, where a matching histological image is available. Visium is to date the most widely used SRT platform, based on available published data sets, and the need for accessible analysis tools is therefore greater than for other platforms. Besides the original Visium Gene Expression platform for fresh frozen tissue samples, which uses poly-A capture chemistry, semla can also handle output data from the probe-based Visium FFPE platform, with or without CytAssist (including the extra large (XL) capture areas), Visium with immunofluorescence (IF) images, and the under-development Visium high definition (HD) platform. For Visium IF, the images have to be registered to the hematoxylin and eosin (H&E) stained section image, enabling the user to replace the H&E image with the IF image when loading the image with semla. However, the image processing functionalities provided in semla, such as masking of the background through tissue outline detection, have been optimized for H&E images and will not produce desirable results for IF images in their default mode; instead, users can pass a custom method to the masking function. Despite the Visium-centric development of semla, the package is not necessarily limited to Visium data. As long as a feature × spot matrix and a spot coordinate file are available, semla can import the data, although there are currently no functions within semla (v. 1.1) designed to handle output files in formats other than those generated by the 10x Space Ranger pipeline. Taking advantage of Seurat's spatial data support, it is nonetheless possible to import Slide-seq data into semla by converting the Slide-seq Seurat object into a semla-compatible object, as outlined in the article "Slide-seq data" on the website (https://ludvigla.github.io/semla/articles/slide-seq.html) and sketched below. Given the lack of a histological image in Slide-seq data, all image-related functionalities of semla, including the Feature Viewer, will be inaccessible when working with this data type. Other spatial analysis tools may moreover behave differently and require some caution, given that most functions have been developed with the Visium data format in mind. The part of the semla package allowing for additional spatially resolved omics platform support is still under development, and it is our ambition to make semla compatible with more platforms in order to allow for integrated multi-platform analyses in the future.
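A hedged sketch of the Slide-seq conversion; UpdateSeuratForSemla() is our best understanding of semla's conversion helper and should be verified against the "Slide-seq data" article linked above:

library(semla)
# 'slide_seq' is assumed to be an existing Seurat object holding Slide-seq data
se_slideseq <- UpdateSeuratForSemla(slide_seq)
# Image-related functions (e.g., the Feature Viewer) remain unavailable for this data type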

Conclusion
Our vision with a specialized R package for SRT data analysis is that you should be able to perform all your desired exploratory work and spatial analyses with the convenience of not having to convert your R object between various analysis steps. Therefore, we see it as a large benefit to base semla on the widely popular Seurat object structure, which allows users to tap into all tools developed for scRNA-seq analysis with Seurat and piggyback on the fantastic development that the Seurat team delivers with each new version of the package. Initiating a new semla object with your Visium data will thus automatically allow you to apply any Seurat-compatible tool, as well as open up the possibility to utilize all the specialized functions provided in semla. The framework of semla is designed to be flexible and to allow for continuous development, including potential support for SpatialExperiment data formats or for additional spatial platforms besides Visium.

Limitations
Semla is an R package intended for processing, analysis, and visualization of SRT data, with a specific focus on the Visium Gene Expression platform. It is built to extend the Seurat toolbox and offers a wide variety of functions designed to provide the basis for unleashing the full potential of your spatial data. While semla is readily available for use from its initial release, there are some limitations we would like to highlight.
SRT platform support. As mentioned, semla is currently designed specifically for Visium Gene Expression data. It is nonetheless possible to load spatial data of other origins with semla, albeit with limited compatibility for certain analysis tools and a lack of convenient functions for loading other data types. These are aspects we aim to address in future updates of semla. In the meantime, we encourage users with spatial omics data types other than Visium to use alternative packages, such as Seurat (v5 beta) or Giotto.
Data structure. Semla is built upon the Seurat framework and relies on the Seurat object structure for storing expression assays, dimensionality reductions, metadata, and other associated information. For Visium data sets, semla adds additional information about the spot coordinates and, optionally, the H&E images. This additional information typically consumes much less memory than the count assays stored within the Seurat object, thus exerting minimal impact on performance. However, Seurat v5, which at this time is a beta release, offers new infrastructure for handling even larger data sets with millions of cells (or spots), even when the data cannot be fully loaded into memory. As of semla's current version (v. 1.1), we have not implemented support for any format other than Seurat (e.g., SpatialExperiment), although semla has been designed to ease a future implementation of such support.
Image processing. While it is possible to load images (PNG or JPEG format) other than the H&E image provided among the Space Ranger output files, such as Visium IF images, it is important to ensure that their size and alignment are equivalent to those generated by the Space Ranger pipeline, which may require manual adjustment with image editing software (e.g., Adobe Photoshop). Image processing functions, such as MaskImages(), might fail for certain histological images, for instance when staining artifacts are present. A number of potentially useful image processing functionalities are currently missing in semla, for instance cell segmentation tools or feature extraction methods.
Multi-sample alignment. Semla includes an interactive sample alignment tool that can be used for alignment of tissue sections with similar shape and histology, for instance consecutive tissue sections. The alignment tool handles rigid transformations such as rotation, scaling, and translation. Any non-linear global or local distortions of the samples would, on the other hand, be difficult to account for using only rigid transformations. Moreover, the alignment tool cannot perform spot-level alignment across multiple sections to form a common coordinate framework or a "consensus slice". As this is a non-trivial task, we recommend users looking to perform such transformations to explore other tools developed for this purpose.

