Author Topic: Post processing and data analysis  (Read 12764 times)

Gareth D Hatton

  • Professor
  • ****
  • Posts: 51
Post processing and data analysis
« on: September 27, 2016, 02:05:27 AM »
I am not sure where this discussion should be, but I thought it would be nice to have a section on post-processing, data analysis and possibly automation of these tasks.  I am sure we all have scripts and other programs we use to look at the data.  Ben has started this with his postings of some scripts:

Beam diameter:
http://probesoftware.com/smf/index.php?topic=775.msg4814#msg4814

Surfer script to convert grid images for imagej:
http://probesoftware.com/smf/index.php?topic=739.msg4816#msg4816

Searchable Excel L-value table:
http://probesoftware.com/smf/index.php?topic=537.msg2996#msg2996

In this spirit I have attached some of the code I use in R for k-means clustering and plotting of data.  A lot of this is fairly basic, but it should save others time when trying to input and output data using R (https://www.r-project.org/).

Included here is k-means clustering, already part of PfE, but also cluster validation; this is a tricky subject but is important when determining what k is in k-means clustering.  Currently I have no clear answer as to how best to determine k, but there are excellent examples here: http://stackoverflow.com/questions/15376075/cluster-analysis-in-r-determine-the-optimal-number-of-clusters.  Personally, as you will see from the attached script, I use the within-groups sum of squares (the elbow method) for this.  You will see that there are many others; however, most do not work on image datasets, as the calculations cannot be done on a desktop computer due to the sheer size of the data.  It can also be a good idea to look at the clusters in more than one way, so I have added a dendrogram at the end of the script.
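
For a flavour of the elbow method, something like this works (a minimal sketch, where dat stands in for a numeric matrix of pixel compositions):

Code: [Select]
# within-groups sum of squares for k = 1 to 10; the "elbow" in the curve suggests k
wss <- sapply(1:10, function(k) kmeans(dat, centers = k, nstart = 5)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "number of clusters k",
     ylab = "within-groups sum of squares")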

I hope that this will promote discussion and collaboration, but could also be a resource for students.

Best wishes

Gareth

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Post processing and data analysis
« Reply #1 on: September 27, 2016, 11:25:13 AM »
Quote from: Gareth D Hatton on September 27, 2016, 02:05:27 AM

I thought it would be nice to have a section on post-processing, data analysis and possibly automation of these tasks...  In this spirit I have attached some of the code I use in R for k-means clustering and plotting of data...  I hope that this will promote discussion and collaboration, but could also be a resource for students.

Hi Gareth,
This is really informative and helpful, but also very heady stuff!   :o

It's going to take me a while to digest it all!
john
The only stupid question is the one not asked!

Gareth D Hatton

  • Professor
  • ****
  • Posts: 51
Re: Post processing and data analysis
« Reply #2 on: September 29, 2016, 09:12:05 AM »
After a conversation with Ben Buse I have had a crack at loading .grd files into R, and I have the following, which was surprisingly simple:

Code: [Select]
library(rgdal)
# read a quant map .grd file and write it back out as a 32-bit GeoTIFF
grd <- readGDAL("C:/Filepath/Sample_imageNo_WDS1_Al_TAP__Quant.grd")
writeGDAL(grd, fname = "C:/Filepath/file.tif", drivername = "GTiff", type = "Float32")

These files are readable by ImageJ.
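
For many maps the same two calls loop easily (a sketch, assuming all the .grd files sit in one folder):

Code: [Select]
# batch-convert every .grd in a folder to a 32-bit GeoTIFF
library(rgdal)
for (f in list.files("C:/Filepath", pattern = "\\.grd$", full.names = TRUE)) {
  grd <- readGDAL(f)
  writeGDAL(grd, fname = sub("\\.grd$", ".tif", f), drivername = "GTiff", type = "Float32")
}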

What I really want is an automated way of getting these images into an RGB file in R.

I have had a crack at it using the best tool (Google) and came up with:

Code: [Select]
#import the three element tiffs and combine into an RGB composite
library(raster)
AAl <- "C:/Filepath/AAl.tif"
ACe <- "C:/Filepath/ACe.tif"
ABa <- "C:/Filepath/ABa.tif"
rgbRaster <- stack(AAl, ACe, ABa)  # three-layer stack, one layer per element
plotRGB(rgbRaster,r=3,g=2,b=1, scale=800, stretch = "Lin")  # linear contrast stretch

The problem I now have is that these should be square images and they are not, as can be seen in the attached image.  I cannot find anything on the internet to help; has anyone got any ideas?  I suspect it is to do with the coordinates somehow...
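
For reference, this is how I am inspecting the stack geometry (just a diagnostic sketch):

Code: [Select]
# unequal x/y pixel sizes or an odd extent would stretch the plot
res(rgbRaster)     # pixel size in x and y
extent(rgbRaster)  # map extent of the stack
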
« Last Edit: September 29, 2016, 05:09:44 PM by John Donovan »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Post processing and data analysis
« Reply #3 on: September 29, 2016, 05:11:29 PM »
Quote from: Gareth D Hatton on September 29, 2016, 09:12:05 AM

The problem I now have is that these should be square images and they are not, as can be seen in the attached image.  I cannot find anything on the internet to help; has anyone got any ideas?  I suspect it is to do with the coordinates somehow...

Could it be a page size (orientation) issue?  Can you specify "Landscape" mode and see if it elongates the image in the other direction?
john
The only stupid question is the one not asked!

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #4 on: October 03, 2016, 04:38:07 AM »
Quote from: Gareth D Hatton on September 27, 2016, 02:05:27 AM

Included here is k-means clustering, already part of PfE, but also cluster validation; this is a tricky subject but is important when determining what k is in k-means clustering.  Currently I have no clear answer as to how best to determine k, but there are excellent examples here: http://stackoverflow.com/questions/15376075/cluster-analysis-in-r-determine-the-optimal-number-of-clusters.  Personally, as you will see from the attached script, I use the within-groups sum of squares (the elbow method) for this.  You will see that there are many others; however, most do not work on image datasets, as the calculations cannot be done on a desktop computer due to the sheer size of the data.  It can also be a good idea to look at the clusters in more than one way, so I have added a dendrogram at the end of the script.


Hi Gareth,

Am I right in saying that with k-means clustering the trick is to identify the cluster centres, and that this can either be done with an algorithm, as in PFE (where you specify the number of clusters), or by telling it the cluster centres directly? For an unknown material you want k-means to identify the clusters and run without any prior knowledge, which PFE is good at. For k-means clustering I think the outcome is largely determined by the identification of the initial clusters. I wondered whether, if you already know the clusters or phases present, you could specify the cluster centres and then run the k-means clustering to identify the phases within the sample.

Many rock samples are complex to the algorithm but simple to the geologist. You get mixed analyses from grain boundaries, phases that are too small for the analytical conditions, and phases which occupy a small area proportion (a small proportion of the dataset) but are crucial for interpretation, so with more clusters you don't always get the subdivisions you're looking for. But the geologist knows what phases to look for, or should do! Is the k-means cluster identification weighted towards the most abundant phases?
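
(In base R at least, kmeans() will take a matrix of starting centres in place of a cluster count - a minimal sketch, where dat is the pixel data and the rows of ctrs are made-up phase compositions:)

Code: [Select]
# hand kmeans() known phase compositions as the starting centres
ctrs <- rbind(c(60, 25, 8),  # hypothetical phase 1 (three element columns)
              c(45, 10, 40), # hypothetical phase 2
              c(30, 5, 12))  # hypothetical phase 3
km <- kmeans(dat, centers = ctrs)
table(km$cluster)            # pixel counts assigned to each phase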

Thanks

Ben
« Last Edit: October 03, 2016, 08:17:31 AM by John Donovan »

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #5 on: November 28, 2016, 07:41:53 AM »


Quote from: Gareth D Hatton on September 29, 2016, 09:12:05 AM

The problem I now have is that these should be square images and they are not, as can be seen in the attached image.  I cannot find anything on the internet to help; has anyone got any ideas?  I suspect it is to do with the coordinates somehow...

Hi Gareth,

I've finally got round to checking, and using either plot or levelplot I get the correct aspect ratio (the greyscale is a tif from ProbeImage).



On the left is levelplot, which gives the nicer image, and it's not flipped.

The code was:

Code: [Select]
library(rgdal)
library(raster)
setwd('O:\\Documents\\PFE_Data\\admin\\PFE_1-5')
# read the .grd and convert to a RasterLayer
grd=readGDAL('1-5_00004_WDS1_Ca_PETJ__Quant.grd')
rgrd<-raster(grd)
plot(rgrd,axes=T,asp=1,col=heat.colors(256))

png('plot.png')
plot(rgrd,axes=T,asp=1,col=heat.colors(256))
dev.off()

library(rasterVis)
# reversing ylim and xlim orients the image correctly (not flipped)
levelplot(rgrd,margin=FALSE,ylim=c(ymax(rgrd),ymin(rgrd)),xlim=c(xmax(rgrd),xmin(rgrd)))

png('levelplot.png')
levelplot(rgrd,margin=FALSE,ylim=c(ymax(rgrd),ymin(rgrd)),xlim=c(xmax(rgrd),xmin(rgrd)))
dev.off()

I did find, when extracting the data for points plotted on the map, that I needed to set the coordinate system so that I could specify the radius of the points extracted.
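
The buffered extraction looked roughly like this (a sketch; the CRS string, points and radius are placeholders):

Code: [Select]
# give the raster a projected coordinate system so buffer is read in map units
projection(rgrd) <- "+proj=utm +zone=30 +datum=WGS84" # placeholder CRS
pts <- cbind(c(0.5, 1.2), c(0.8, 0.3))                # placeholder x,y points
extract(rgrd, pts, buffer = 0.01, fun = mean)         # mean value within radius 0.01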

« Last Edit: April 14, 2020, 12:18:19 PM by John Donovan »

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #6 on: April 23, 2018, 03:09:55 AM »
Hi,

I had a request regarding the code I used to generate principal component maps in the paper "Evaluating X-Ray Microanalysis Phase Maps Using Principal Component Analysis" (https://doi.org/10.1017/S1431927618000090), which addresses the questions raised above. Principal component maps are a great way of checking whether the phase mapping algorithm has done a good job of characterising the sample. Here's the method I used for creating an RGB image of the first three principal components.

It's not very elegant.

First I converted the maps into x,y,intensity text files by loading them into ImageJ and saving as xy coordinates. These I then imported into R. (But you can import grid files directly; see the posts above.) Using the oxide map files is best (it saves converting to oxide).

Code: [Select]
# read each element map as an x,y,intensity text table (tab-separated)
MapTi<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS5_Ti_PETL__Quant.txt",sep="\t")
MapSi<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS1_Si_TAP__Quant.txt",sep="\t")
MapNa<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS2_Na_TAP__Quant_2.txt",sep="\t")
MapCa<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS3_Ca_PETH__Quant.txt",sep="\t")
MapFe<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS4_Fe_LIFH__Quant.txt",sep="\t")
MapMg<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS1_Mg_TAP__Quant.txt",sep="\t")
MapAl<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS2_Al_TAP__Quant.txt",sep="\t")
MapK<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS3_K_PETH__Quant.txt",sep="\t")
MapMn<-read.table("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00403_WDS4_Mn_LIFH__Quant.txt",sep="\t")
# column 3 of each table is the intensity; bind the nine elements into one matrix
Maps<-cbind(MapNa[,3],MapMg[,3],MapAl[,3],MapSi[,3],MapK[,3],MapCa[,3],MapTi[,3],MapFe[,3],MapMn[,3])
# principal component analysis of the pixel-by-element matrix
pcaMaps<-prcomp(Maps)
# write x, y and the scores of the first three PCs back out as x,y,value tables
xyp<-matrix(nrow=length(pcaMaps$x[,1]),ncol=3)
xyp[,1]<-MapCa[,1] # x coordinates
xyp[,2]<-MapCa[,2] # y coordinates
xyp[,3]<-pcaMaps$x[, 1] # PC1 scores
write.table(xyp,"O:\\Documents\\R project\\kNN_georock\\xenolithmap\\testpca_1.txt",sep="\t")
xyp[,3]<-pcaMaps$x[, 2] # PC2 scores
write.table(xyp,"O:\\Documents\\R project\\kNN_georock\\xenolithmap\\testpca_2.txt",sep="\t")
xyp[,3]<-pcaMaps$x[, 3] # PC3 scores
write.table(xyp,"O:\\Documents\\R project\\kNN_georock\\xenolithmap\\testpca_3.txt",sep="\t")

Then in Surfer: Grid from Data, save the grid file as .img, then import into ImageJ and create a stack / composite image.
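
(A possible shortcut, untested: write the PC scores straight back into one of the source grids from R and export a GeoTIFF, skipping Surfer. This assumes the pixel ordering of the text files matches the grid:)

Code: [Select]
# reuse the geometry of one input grid and swap in the PC1 scores
library(rgdal)
grd <- readGDAL("O:\\Documents\\R project\\kNN_georock\\xenolithmap\\xenolith_specA_00402_WDS3_Ca_PETH__Quant.grd") # hypothetical matching grid
grd$band1 <- pcaMaps$x[, 1] # only valid if the pixel order matches the text export
writeGDAL(grd, fname = "O:\\Documents\\R project\\kNN_georock\\xenolithmap\\pca_1.tif", drivername = "GTiff", type = "Float32")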

Ben
« Last Edit: April 23, 2018, 03:23:36 AM by Ben Buse »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Post processing and data analysis
« Reply #7 on: April 23, 2018, 08:29:05 AM »
Hi Ben,
Let me make sure I understand what you're doing.

So you're taking the oxide wt.% results from CalcImage, converting them into phases using PCA, and then taking the first three PCA components and displaying them as a composite RGB?

Is that correct?
john
« Last Edit: April 23, 2018, 08:32:24 AM by Probeman »
The only stupid question is the one not asked!

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #8 on: April 26, 2018, 03:47:08 AM »
Hi John,

No - I convert the maps into phases using either KNN (k-nearest neighbour) or k-means clustering. Then I do principal component analysis and plot the first three principal components (PCs) as an RGB-PC map to graphically show the chemical variation. Then I can compare the RGB-PC map to the phase map and see whether the phase map accurately identifies the chemical variation, or whether it misses minor phases that can be seen on the RGB-PC map. The idea is to have a method of assessing how well the phase classification algorithm performs on a sample.
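
(For the plotting step, a minimal sketch, assuming the three PC maps are already rasters r1, r2 and r3:)

Code: [Select]
library(raster)
# stretch each PC to 0-255 so all three fill the RGB range
rescale <- function(r) 255 * (r - cellStats(r, min)) / (cellStats(r, max) - cellStats(r, min))
plotRGB(stack(rescale(r1), rescale(r2), rescale(r3)), r = 1, g = 2, b = 3)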

Ben

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Post processing and data analysis
« Reply #9 on: April 26, 2018, 12:54:29 PM »
Quote from: Ben Buse on April 26, 2018, 03:47:08 AM

No - I convert the maps into phases using either KNN (k-nearest neighbour) or k-means clustering. Then I do principal component analysis and plot the first three principal components as an RGB-PC map...  The idea is to have a method of assessing how well the phase classification algorithm performs on a sample.

Hi Ben,
OK, that makes sense.  Have you ever tried our modified K-means clustering method in CalcImage as described here:

http://probesoftware.com/smf/index.php?topic=1071.0

How does it compare to what you're doing, at least in the first steps of your procedure?
john
The only stupid question is the one not asked!

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #10 on: April 30, 2018, 04:39:20 AM »
Hi John,

Yes, that's what spurred on the investigation, as many papers show k-means clustering works well where you have spherical clusters of similar size; problems come in when you deviate from this, particularly where you're using initial random cluster positions, with the result strongly dependent on those initial positions. What I found was that it did a good job on the large (abundant) phases, but the small and sparse phases were sometimes not recognised - I did try it on a difficult sample! Increasing the number of clusters tends to subdivide the large phases rather than identify the sparse phases. Improvements were found when initial cluster positions were specified, or maximum intensity was used for the initial positions, or KNN was run with a subset of the data. The other point is that the phase classification need not be the end of the process; further data processing can be done, subdividing phases or examining chemical variation within phases.

I tried the sample with other proprietary phase algorithms and they also had problems identifying the minor phases whilst maintaining the image resolution. The bottom line is that no algorithm is 100% foolproof, which comes back to making a critical assessment of the results of phase classification.
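
(The KNN variant is only a few lines - a sketch, where train is a small hand-labelled subset of pixels and lab holds its phase labels:)

Code: [Select]
library(class)
# classify every pixel by its k nearest neighbours in the labelled subset
phase <- knn(train = train, test = dat, cl = lab, k = 5)
table(phase) # pixel counts per phase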

Ben
« Last Edit: April 30, 2018, 11:04:37 AM by Ben Buse »

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #11 on: May 01, 2018, 08:40:03 AM »
Hi everyone,

A separate post-processing topic: averaging a wide line. Why does microscopy software use the mean?

In the example below, using ImageJ, imagine a noisy x-ray map with counts increasing upwards towards an interface. To reduce noise I want to take an average profile, but the mean is strongly affected by inclusions.



If I take the average over a wide line:



The profile is strongly affected by the "inclusions" because the mean is used.



I've tried thresholding and setting the inclusions to NaN, but the profile won't ignore NaN when averaging; instead it gives breaks in the profile.

Now I have a tedious piece of code which I paste into R, which extracts a series of individual lines along an interface (adjusting for the interface angle) and calculates the median. This works well; I wondered what other people's experience was.
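
A toy illustration of the difference (600 synthetic profiles, a few carrying inclusion spikes):

Code: [Select]
library(matrixStats)
prof <- matrix(rnorm(600 * 250, mean = 2, sd = 0.2), nrow = 600) # 600 noisy profiles
prof[sample(600, 30), 100:110] <- 20                             # "inclusions" in 30 of them
plot(colMeans(prof), type = "l", col = "blue", ylab = "wt.%")    # the mean is dragged up by the spikes
lines(colMedians(prof), col = "red")                             # the median barely moves
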
« Last Edit: April 13, 2020, 10:18:28 PM by John Donovan »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Post processing and data analysis
« Reply #12 on: May 01, 2018, 11:14:42 AM »
Hi Ben,
I can't help you with ImageJ, but did you try running any of the "strip" map averaging methods in CalcImage? 

http://probesoftware.com/smf/index.php?topic=41.msg897#msg897

http://probesoftware.com/smf/gallery/1_25_11_17_4_27_57.png

You can specify horizontal or vertical strips of a specified micron width and the Surfer software crunches through the map pixels by row (vertical) or by column (horizontal) and averages the data values.
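
In R terms, I believe the equivalent strip average is just a row or column mean over the map matrix (a sketch, with m standing in for the map as a numeric matrix):

Code: [Select]
horiz <- rowMeans(m, na.rm = TRUE) # one value per horizontal strip (row)
vert  <- colMeans(m, na.rm = TRUE) # one value per vertical strip (column)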

I'm sure I'm the one missing something, but just wondering if you had somehow missed this feature in CalcImage...
john
The only stupid question is the one not asked!

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #13 on: May 02, 2018, 02:37:51 AM »
Hi John,

My main point was: when averaging, do you use the mean or the median? (For the actual project I don't use CalcImage, as I'm dealing with sub-areas of a map that are non-orthogonal.)

Thanks

Ben

Ben Buse

  • Professor
  • ****
  • Posts: 488
Re: Post processing and data analysis
« Reply #14 on: May 02, 2018, 04:11:58 AM »
So here's the R code (for two elements, here Zn and Si maps, provided the interface slopes in the correct direction: just off horizontal, with the right of the image lower). It extracts a box 600 µm wide and 250 µm high, for a pixel spacing of 1 µm. (To change from 600 µm wide, change the 1:600 lines; to change from 250 µm high, change the 0.25 used for the vertical start line.) Units are in mm.

Attached is a generalised version of the code where the width and height are set using boxwidth and boxheight; it can also work in either direction from horizontal. The attached automatic script (.r file) guides you through it.

Initialize and read maps from .grd files

Code: [Select]
library(rgdal)
library(raster)
library(rasterVis)

#enter directory
setwd('')

#enter Zn & Si map grid file names
grdZn<-readGDAL('')
grdSi<-readGDAL('')
rgrdZn<-raster(grdZn)
rgrdSi<-raster(grdSi)

calculate the angle of the interface by clicking on two points along it, then right-click and end

Code: [Select]
windows.options(width=20, height=10)
plot(rgrdSi,asp=1)
#grid(3,5)
bs<-click(rgrdSi,xy=T) # click on boundary in two places to determine angle
xs<-max(bs[,1])-min(bs[,1])
ys<-max(bs[,2])-min(bs[,2])
xs100<-(.001/xs)*xs #.001 mm is 1 um - the x step of one pixel between extracted lines
ys100<-(.001/xs)*ys #matching y step so the lines follow the interface slope

a<-max(bs[,1])-min(bs[,1])
o<-max(bs[,2])-min(bs[,2])
sang<-((atan(o/a))*180)/pi #interface angle in degrees
sang_rad<-atan(o/a) #interface angle in radians


determine the start position of the extraction box [i.e. the first line to extract: length 250 µm, orientated perpendicular to the interface]

Code: [Select]
#determine vertical second point
plot(rgrdSi,asp=1)
st<-click(rgrdSi,xy=T) # click on start point
ed<-cbind(st[,1],st[,2]+.25)
st<-cbind(st[,1],st[,2])
#op<-tan(sang_rad)*0.25 #shift along slope for vertical of .25 or 250um
hop<-sin(sang_rad)*0.25#shift along slope for vertical of .25 or 250um
vop<-tan(sang_rad)*hop
#ed<-cbind(ed[,1]+op,ed[,2]-vop)
ed<-cbind(ed[,1]+hop,ed[,2]-vop)
#h<-sqrt(xs^2+ys^2) #pythagoras calculate hypotenuse which #set to step
#hxs<-(1/h)*xs
#hys<-(1/h)*ys
#ed<-cbind(ed[,1]-(hxs*op),ed[,2]-(hys*op))
#check the extraction line is perpendicular to the interface
a<-ed[1,2]-st[1,2]
o<-ed[1,1]-st[1,1]
((atan(o/a))*180)/pi #should match the interface angle sang
90-((atan(o/a))*180)/pi #angle of the line from horizontal
#create line
fL<-rbind(st,ed) #create first line
sfL<-SpatialLines(list(Lines(Line(fL),ID="a")))
p<-(levelplot(rgrdSi,at=seq(minValue(rgrdSi),5,length=100),margin=FALSE,ylim=c(ymax(rgrdSi),ymin(rgrdSi)),xlim=c(xmax(rgrdSi),xmin(rgrdSi)),main=list("Si Wt %",cex=0.8)))
p+layer(sp.lines(sfL,col="red",lwd=1))




extract data from box area, as series of individual lines

Code: [Select]
lxxyyc<-fL
# step the line across the map 600 times (one 1 um pixel per step), extracting Si and Zn along each line
for(i in 1:600){
assign(paste("L",i,sep="."),cbind(lxxyyc[,1]-xs100,lxxyyc[,2]+ys100))
assign(paste("spL",i,sep="."),SpatialLines(list(Lines(Line(eval(parse(text=paste("L",i,sep=".")))),ID="a"))))
lxxyyc<-cbind(lxxyyc[,1]-xs100,lxxyyc[,2]+ys100)
# to check the correct area is being extracted, comment out the following two lines to speed up a test run
assign(paste("eLSi",i,sep="."),extract(rgrdSi,eval(parse(text=paste("spL",i,sep=".")))))
assign(paste("eLZn",i,sep="."),extract(rgrdZn,eval(parse(text=paste("spL",i,sep=".")))))
}

# angle between lines
o<-fL[1,1]-L.1[1,1]
a<-L.1[1,2]-fL[1,2]
((atan(o/a))*180)/pi
-90-((atan(o/a))*180)/pi


# plot lines on map

LTop<-rbind(L.1[1,],L.600[1,])
LBottom<-rbind(L.1[2,],L.600[2,])
SpTop<-SpatialLines(list(Lines(Line(LTop),ID="a")))
SpBottom<-SpatialLines(list(Lines(Line(LBottom),ID="a")))

png("Position of line extraction_box.png",height=512,width=1024,units="px")
p<-(levelplot(rgrdSi,at=seq(minValue(rgrdSi),5,length=100),margin=FALSE,ylim=c(ymin(rgrdSi),ymax(rgrdSi)),xlim=c(xmin(rgrdSi),xmax(rgrdSi)),colorkey=F))
p<-p+layer(sp.lines(eval(parse(text=paste("spL",1,sep="."))),col="red",lwd=3))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",100,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",200,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",300,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",400,sep="."))),col="red",lwd=1))
#p<-p+layer(sp.lines(eval(parse(text=paste("spL",500,sep="."))),col="red",lwd=1))
p<-p+layer(sp.lines(eval(parse(text=paste("spL",600,sep="."))),col="red",lwd=3))
p<-p+layer(sp.lines(SpTop,col="red",lwd=3))
p<-p+layer(sp.lines(SpBottom,col="red",lwd=3))
p
dev.off()

extracted lines to stack; average lines using median; save results

Code: [Select]
# stack the 600 extracted Si profiles into a matrix, one profile per row
eLSi_stack<-eLSi.1[[1]]
for(i in 2:600){
eLSi_stack<-rbind(eLSi_stack,eval(parse(text=paste("eLSi",i,sep=".")))[[1]][])
}

# and the same for the Zn profiles
eLZn_stack<-eLZn.1[[1]]
for(i in 2:600){
eLZn_stack<-rbind(eLZn_stack,eval(parse(text=paste("eLZn",i,sep=".")))[[1]][])
}



library(matrixStats) #for colMedians

#save averaged profiles (position, median and mean per column) and the raw extracted lines
avg_res_Si<-cbind(seq(1,length(eLSi_stack[1,]),1),colMedians(eLSi_stack),colMeans(eLSi_stack))
write.table(avg_res_Si,'Si line extract median and mean.csv',row.names=F,sep=",")
avg_res_Zn<-cbind(seq(1,length(eLSi_stack[1,]),1),colMedians(eLZn_stack),colMeans(eLZn_stack))
write.table(avg_res_Zn,'Zn line extract median and mean.csv',row.names=F,sep=",")

write.table(eLSi_stack,'Si 600 lines extracted.csv',sep=",")
write.table(eLZn_stack,'Zn 600 lines extracted.csv',sep=",")

plot results as graph and save

Code: [Select]
#view on screen
sy <-250 #y-axis limit (profile length in microns)
sx<-5 #x-axis limit (wt. %)
sxm<-1 #x offset for the legend position
windows.options(width=20, height=10)

plot(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),pch='',xlim=c(-0.3,sx),ylim=c(0,sy),ylab="distance (microns)",xlab="(wt. %)",cex.lab=1.5,cex.axis=1.5)
lines(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),col="red",lwd=3)
lines(colMedians(eLZn_stack),seq(1,length(eLSi_stack[1,]),1),col="blue",lwd=3)
legend(sx-sxm,sy,legend=c("Si","Zn"),col=c("red","blue"),lty=1:1,cex=1.5,lwd=3)

#save
png("Zn Si extract lines horiz.png",height=512,width=1024,units="px")
plot(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),pch='',xlim=c(-0.3,sx),ylim=c(0,sy),ylab="distance (microns)",xlab="(wt. %)",cex.lab=1.5,cex.axis=1.5)
lines(colMedians(eLSi_stack),seq(1,length(eLSi_stack[1,]),1),col="red",lwd=3)
lines(colMedians(eLZn_stack),seq(1,length(eLSi_stack[1,]),1),col="blue",lwd=3)
legend(sx-sxm,sy,legend=c("Si","Zn"),col=c("red","blue"),lty=1:1,cex=1.5,lwd=3)
dev.off()
« Last Edit: May 03, 2018, 08:09:03 AM by Ben Buse »