
3 Steps to prune MongoDB in UniFi Controller

You will need two terminal sessions for this repair: the first to run the mongo daemon and the second to prune the database. You will also need the mongo_prune_js.js script.
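If you don’t have the script yet, you can download it with wget (this is the same Ubiquiti support link used in the later post about a controller that failed to start):

wget https://ubnt.zendesk.com/hc/article_attachments/115024095828/mongo_prune_js.js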

Step 1. Log in to the first session on the UniFi Controller

Once you are logged in, stop the unifi service using the following command:

service unifi stop

Make sure there is no MongoDB process still running using the following command:

ps -aux | grep mongo
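If the output shows a mongod process still running (other than the grep command itself), stop it before continuing; for example:

sudo pkill -f mongod

Only move on once no mongod process remains.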

Run MongoDB manually using the following command:

sudo mongod --port 27117 --dbpath /usr/lib/unifi/data/db --smallfiles --logpath /usr/lib/unifi/logs/server.log --journal

You should see mongod start up and wait for connections on port 27117.

Step 2. Log in to the second session on the UniFi Controller

Download the pruning script if you don’t have it already, then edit the file using the following command:

vi mongo_prune_js.js

and make sure the following lines are there (days controls how many days of data to keep, and dryrun=true means the script only reports what it would remove):

var days=0;
var dryrun=true;

Then execute the pruning script as below:

sudo /usr/bin/mongo --port 27117 < mongo_prune_js.js

When the dry run finishes you should see a summary of what would be pruned from each collection.

Once the dry run succeeds, edit mongo_prune_js.js as follows and run it again:

var days=0;
var dryrun=false;

When it runs to completion this time, the entries have actually been pruned from the database.

Step 3. Run the unifi service

Jump back to the first session and stop the running mongod using Ctrl + C. You should be back at your command prompt now.

Start the unifi service using the following:

service unifi start

If all goes well, the service will start without any error messages.

Double-check the log files to ensure that the unifi service started successfully using the following commands:

tail -f /usr/lib/unifi/logs/mongod.log
tail -f /usr/lib/unifi/logs/server.log

You can also check that your UniFi web interface has started successfully by browsing to the following URL:

https://yourunifiipaddress:8443

If you run into problems pruning because you are running out of disk space, you can find large files using the following command:

find / -type f -size +20M -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'

I hope this post has saved you time and frustration.


Connecting IoT Sensors data to Node-RED

This is a continuation of the temperature sensor project from the previous post. The concept is to allow data from sensors (temperature, motion) to be displayed in Apple HomeKit, so that the user can interact with the information and control the connected IoT devices (lights, fan, etc.).

The following instructions show how to install Node-RED on a Linux computer running Debian.

sudo npm install -g --unsafe-perm node-red

You will also need to install the Mosquitto MQTT message broker; here are the commands required:

wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
sudo apt-key add mosquitto-repo.gpg.key
cd /etc/apt/sources.list.d/
sudo wget http://repo.mosquitto.org/debian/mosquitto-jessie.list
sudo apt-get update
sudo apt-get install mosquitto
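To quickly verify that the broker is working, you can also install the Mosquitto command line clients and publish a test message to yourself (the topic name below is just an example):

sudo apt-get install mosquitto-clients
mosquitto_sub -h localhost -t "test/topic"
mosquitto_pub -h localhost -t "test/topic" -m "hello"

Run the mosquitto_sub command in one terminal and the mosquitto_pub command in another; the subscriber should print hello.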

You will also need the Python development package, so grab it using the following command:

sudo apt-get install python-dev

Test the installation. In this example I was using Debian: type the command node, and if you are dropped into the Node.js prompt (>), the installation was successful. You can then exit node by typing the .exit command at the > prompt.
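A quick check looks like this:

node -v
node
.exit

The first command prints the installed Node.js version, the second drops you into the Node.js prompt (>), and .exit (typed at that prompt) takes you back to the shell.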

If all goes well, you can run the node-red command at the command prompt. The startup log will show that Node-RED is now running at http://127.0.0.1:1880.

Node-RED Settings

The Node-RED settings file is called settings.js; on Linux it is located in the /usr/lib/node_modules/node-red folder. You will also have another settings.js file in the .node-red folder in your home directory, and it is this one that is loaded by default.

Creating the flow in Node-RED

Now that you have Node-RED running, it is time to create the flow. In this example we will create a simple flow to read the temperature posted by our ESP8266. Let’s start by firing up your favourite browser and pointing it to the following URL: http://127.0.0.1:1880/

You will be presented with a blank flow editor. To start creating a flow, drop in an “Inject” node from the input section. We will use this as the trigger to get the temperature reading. Once you have dropped it in, double-click it to set its properties. We call the node “timestamp” and set it to repeat every 4 minutes.

The next step is to connect this to an “http request” node, so drop one in and configure it as an HTTP GET that calls a server-side script on the web server. The script needs to return the temperature in JSON format, as below:

{"CurrentTemperature":25}

So my data_store2.php script does exactly that, as shown in the following code:

<?php
/* Read the temperature from the temp.txt file and
   return the value back in JSON format for HomeKit */
$file = './temp.txt';
$temperature = trim(file_get_contents($file));   // trim removes any trailing newline
echo '{"CurrentTemperature":' . $temperature . '}';

Now the final step is to connect to the “Homekit” node from the advanced nodes section. Once you drop in the “Homekit” node, you can double-click it to configure its properties.

Once everything is connected, it is time to deploy the flow. You can do this by clicking the “Deploy” button at the top of the Node-RED window; you will need to click it again whenever you make changes to the flow. Sometimes the deployment might stop the Node-RED server, in which case you just have to run the node-red command again at the command prompt.

If all goes well, you can now test the flow by clicking the button next to the “timestamp” node; the temperature should be read from the web server and displayed in HomeKit.

That concludes this session on configuring Node-RED to work with our temperature sensor data from the ESP8266. Please let me know if you have any questions, and don’t forget to subscribe for updates on similar projects. In the next session we are going to connect this to Apple HomeKit on an iPhone or iPad.


Download Microsoft SharePoint List Attachments using a PowerShell script

I stumbled across a problem when trying to download attachments from a SharePoint list. The list had more than 50,000 rows, and the problem with such a big list is that Windows Explorer cannot display it in Explorer view, so we need a PowerShell script to download all the attachments programmatically.

Step 1. Make sure you have the SharePoint client DLLs required, as loaded in the following code


[void][Reflection.Assembly]::LoadFrom("$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll")
[void][Reflection.Assembly]::LoadFrom("$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll")

Step 2. Define the SharePoint site and the list

The SharePoint site and the list are defined using the $webUrl and $library variables respectively; also don’t forget to specify the local folder where the files will be downloaded.

$webUrl = "http://website.com/sites/sharepointsite" 
$library = "SharepointLibrary"
#Local Folder to dump files
$tempLocation = "C:\temp\"

Step 3. Define how many rows the CamlQuery should return on each iteration

The beauty of the PowerShell script is that you can specify how many rows to return each time, so the script has no problem paging through a big list of over 50,000 items and downloading all the attachments. The following code limits each query to 3000 rows.

$camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
$camlQuery.ViewXml = "<View> <RowLimit>3000</RowLimit></View>"

Step 4. Define the folder structure to hold all the attachments to be downloaded

The following code uses the combination of Title and ID as the folder name to store the attachments. It also checks whether the folder already exists before creating a new one.


    $folderName=$listItem["Title"]+"_"+$listItem["ID"]
    $destinationfolder = $tempLocation + "\"+ $folderName 

     #check if the folder exists; if not, create it
  if (!(Test-Path -path $destinationfolder))        
   {            
     $dest = New-Item $destinationfolder -type directory      
     Write-Host "Created Folder with Name:" $folderName    
   }

The following is the full script to download the list attachments and put them in the local folder. The credentials used are those of the user executing the script, which means the logged-in user will need access to the SharePoint site and permission to download the attachments.


[void][Reflection.Assembly]::LoadFrom("$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll")
[void][Reflection.Assembly]::LoadFrom("$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll")
Clear-Host
#$cred = Get-Credential "user@microsoft.com"
#$credentials = New-Object Microsoft.Sharepoint.Client.SharePointOnlineCredentials($cred.Username, $cred.Password)
$webUrl = "http://website.com/sites/sharepointsite"

$clientContext = New-Object Microsoft.Sharepoint.Client.ClientContext($webUrl)
Write-Host "Connecting To Site: " $webUrl   

 $username = "$env:USERDOMAIN\$env:USERNAME"

$library = "SharepointLibrary" 
#Local Folder to dump files
$tempLocation = "C:\temp\"    

$global:web = $clientContext.Web;
$global:site = $clientContext.Site;

$clientContext.Load($web)
$clientContext.Load($site)

$listRelItems = $clientContext.Web.Lists.GetByTitle($library)

$clientContext.Load($listRelItems)
$clientContext.ExecuteQuery();
Write-Host "list item count " $listRelItems.ItemCount

$camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
$camlQuery.ViewXml =" 3000"
 $listCollection = New-Object System.Collections.Generic.List[string] 
 $count = 0
Do {
$allItems=$listRelItems.GetItems($camlQuery)
$clientContext.Load($allItems)
$clientContext.ExecuteQuery()
$camlQuery.ListItemCollectionPosition = $allItems.ListItemCollectionPosition
foreach ($listItem in $allItems)
 {
    $folderName=$listItem["Title"]+"_"+$listItem["ID"]
    $destinationfolder = $tempLocation + "\"+ $folderName 

     #check if the folder exists; if not, create it
  if (!(Test-Path -path $destinationfolder))        
   {            
     $dest = New-Item $destinationfolder -type directory      
     Write-Host "Created Folder with Name:" $folderName    
   }
    $clientContext.load($listItem)
    $clientContext.ExecuteQuery();

    $attach = $listItem.AttachmentFiles
    $clientContext.load($attach)
    $clientContext.ExecuteQuery();
    if($attach.Count -gt 0){
        Write-Host "No of attachments:" $attach.Count
        foreach ($attachitem in $attach){
            Write-Host "Downloading Attachements started: "   $attachitem.FileName
            $attachpath = $webUrl + "/Lists/"+ $library + "/Attachments/" + $listItem["ID"] + "/" + $attachitem.FileName
            Write-Host "path: " $attachpath 
         
            $path = $destinationfolder + "\" + $attachitem.FileName
            Write-Host "Saving to the location:"  $path

            $siteUri = [Uri]$attachpath
            $client = new-object System.Net.WebClient
            $client.UseDefaultCredentials=$true
            
            try{
                  $client.DownloadFile($attachpath, $path)
                  $client.Dispose()
            } catch{
                write-error "Failed to download $url, $_ "
            }

        }
    }else {
     Write-Host   "For above current item don't have any attachments" 
    }
  }
Write-Host " List item" $count
$count++
} while ($camlQuery.ListItemCollectionPosition -ne $null)
     Write-Host   "Script execution done !" 

Please let me know if the above script is useful to you, and don’t forget to share or subscribe for more frequent updates on similar topics. You can also drop me a line if you have any questions.


4 Steps to download a Microsoft SharePoint Document Library recursively

I stumbled across this problem when we were decommissioning Microsoft SharePoint. We had a huge document library and it was not possible to copy it from the Explorer view, so the solution is to use a PowerShell script to do this automagically.

Step 1. Load the DLLs that are required.

This is done through the following code snippet. It is crucial to have these two DLLs for the copy function to work. The script uses the credentials of the user logged into the machine and executing the script, which removes the complexity of having to enter SharePoint credentials into the script.

# Load the SharePoint 2013 .NET Framework Client Object Model libraries. # 
[void][Reflection.Assembly]::LoadFrom("c:\Microsoft.SharePoint.Client.dll")
[void][Reflection.Assembly]::LoadFrom("c:\Microsoft.SharePoint.Client.Runtime.dll")

Step 2. Define the SharePoint site URL and the document library

Simply enter your SharePoint URL in the following $serverURL variable, put the document library name in the $DocumentLibary variable, and don’t forget to define the destination folder.

$serverURL = "http://sharepoint.url/sites/sitename"
$destination = "C:\temp\"
$DocumentLibary = "Document Library Name"

Step 3. Choose whether you only want a specific folder to be downloaded from the document library

Change the folder name to the one you are interested in downloading; in the following example we only download the “Payments” folder and all the folders underneath it.


function Parse-Lists ($Lists)
{
$clientContext.Load($Lists)
$clientContext.Load($Lists.RootFolder.Folders)
$clientContext.ExecuteQuery()
    
    foreach ($Folder in $Lists.RootFolder.Folders)
        {
            if ($Folder.name -eq "Payments"){   #only download the selected folder
                recurse $Folder
            }
        }

}

Step 4. Execute the script from a PowerShell window or from the command line.

To execute the script from the command line you can run the following, assuming the PowerShell script is saved as “scriptname.ps1”:

powershell.exe -File .\scriptname.ps1

Here is the full script to download the SharePoint document library. Be careful: the script will download the entire document library recursively, so please make sure you check Step 3 above. With great power comes great responsibility.

# Load the SharePoint 2013 .NET Framework Client Object Model libraries. # 
[void][Reflection.Assembly]::LoadFrom("c:\Microsoft.SharePoint.Client.dll")
[void][Reflection.Assembly]::LoadFrom("c:\Microsoft.SharePoint.Client.Runtime.dll")
Clear-Host

$serverURL = "http://sharepoint.url/sites/sitename"
#$siteUrl = $serverURL+"/documents"
$destination = "C:\temp\"
$DocumentLibary = "Document Library Name"
$downloadEnabled = $true
$versionEnabled = $false

# Authenticate with the SharePoint Online site. # 
#$username = ""
#$Password = ""
#$securePassword = ConvertTo-SecureString $Password -AsPlainText -Force  

$clientContext = New-Object Microsoft.SharePoint.Client.ClientContext($serverURL) 
#$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $securePassword) 
#$clientContext.Credentials = $credentials 
if (!$clientContext.ServerObjectIsNull.Value) 
{ 
    Write-Output "Connected to SharePoint Online site: '$serverURL'"
} 


function HTTPDownloadFile($ServerFileLocation, $DownloadPath)
{
#Download the file from the version's URL, download to the $DownloadPath location
    $webclient = New-Object System.Net.WebClient
    $webclient.credentials = $credentials
    Write-Output "Download From ->'$ServerFileLocation'"
    Write-Output "Write to->'$DownloadPath'"
    $webclient.Headers.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f")
    $webclient.DownloadFile($ServerFileLocation,$DownloadPath)
}

function DownloadFile($theFile, $DownloadPath)
{
    $fileRef = $theFile.ServerRelativeUrl;
    Write-Host $fileRef;
    $fileInfo = [Microsoft.sharepoint.client.File]::OpenBinaryDirect($clientContext, $fileRef);
    $fileStream = [System.IO.File]::Create($DownloadPath)
    $fileInfo.Stream.CopyTo($fileStream);
    $fileStream.Close()
}

function Get-FileVersions ($file, $destinationFolder)
{
    $clientContext.Load($file.Versions)
    $clientContext.ExecuteQuery()
    foreach($version in $file.Versions)
    {
        #Add version label to file in format: [Filename]_v[version#].[extension]
        $filesplit = $file.Name.split(".") 
        $fullname = $filesplit[0] 
        $fileext = $filesplit[1] 
        $FullFileName = $fullname+"_v"+$version.VersionLabel+"."+$fileext           

        #Can't create an SPFile object from historical versions, but CAN download via HTTP
        #Create the full File URL using the Website URL and version's URL
        $ServerFileLocation = $serverURL+"/"+$version.Url   #use $serverURL here, since $siteUrl is commented out above

        #Full Download path including filename
        $DownloadPath = $destinationfolder+"\"+$FullFileName
        
        if($downloadenabled) {HTTPDownloadFile "$ServerFileLocation" "$DownloadPath"}

    }
}

function Get-FolderFiles ($Folder)
{
    $clientContext.Load($Folder.Files)
    $clientContext.ExecuteQuery()

    foreach ($file in $Folder.Files)
        {

            $folderName = $Folder.ServerRelativeURL
            $folderName = $folderName -replace "/","\"
            $folderName = $destination + $folderName
            $fileName = $file.name
            $fileURL = $file.ServerRelativeUrl
            
                
            if (!(Test-Path -path $folderName))
            {
                $dest = New-Item $folderName -type directory 
            }
                
            Write-Output "Destination -> '$folderName'\'$filename'"

            #Create the full File URL using the Website URL and version's URL
            $ServerFileLocation = $serverUrl+$file.ServerRelativeUrl

            #Full Download path including filename
            $DownloadPath = $folderName + "\" + $file.Name
                    
            #if($downloadEnabled) {HTTPDownloadFile "$ServerFileLocation" "$DownloadPath"}
            if($downloadEnabled) {DownloadFile $file "$DownloadPath"}

            if($versionEnabled) {Get-FileVersions $file $folderName}
            
    }
}


function Recurse($Folder) 
{
       
    $folderName = $Folder.Name
    $folderItemCount = $folder.ItemCount

    Write-Output "List Name ->'$folderName'"
    Write-Output "Number of List Items->'$folderItemCount'"

    if($Folder.name -ne "Forms")
        {
            #Write-Host $Folder.Name
            Get-FolderFiles $Folder
        }
 
    Write-Output $folder.ServerRelativeUrl
 
    $thisFolder = $clientContext.Web.GetFolderByServerRelativeUrl($folder.ServerRelativeUrl)
    $clientContext.Load($thisFolder)
    $clientContext.Load($thisFolder.Folders)
    $clientContext.ExecuteQuery()
            
    foreach($subfolder in $thisFolder.Folders)
        {
            Recurse $subfolder  
        }       
}


function Parse-Lists ($Lists)
{
$clientContext.Load($Lists)
$clientContext.Load($Lists.RootFolder.Folders)
$clientContext.ExecuteQuery()
    
    foreach ($Folder in $Lists.RootFolder.Folders)
        {
            if ($Folder.name -eq "Payments"){   #only download the selected folder
                recurse $Folder
            }
        }

}

$rootWeb = $clientContext.Web
$LibLists = $rootWeb.lists.getByTitle($DocumentLibary)
$clientContext.Load($rootWeb)
$clientContext.load($LibLists)
$clientContext.ExecuteQuery()

Parse-Lists $LibLists

 

Please let me know if the above script is useful. Feel free to subscribe to my blog, share this script, or ask me any questions about it.


UniFi network controller failed to start

I was helping a friend fix his UniFi network controller that failed to start. I didn’t know much about it, but we think this was related to the MongoDB database being full of event logs, which bloated its size.

Here are the steps that I did to fix it.

1. Stop the unifi service using the following command

service unifi stop

2. Repair the database with the following command

mongod --dbpath /usr/lib/unifi/data/db --smallfiles --logpath /usr/lib/unifi/logs/server.log --repair

3. Download the pruning script using wget

wget https://ubnt.zendesk.com/hc/article_attachments/115024095828/mongo_prune_js.js

4. Perform the test run using the following command

mongo --port 27117 < mongo_prune_js.js

5. If mongod is not running, you might have to start it first with the following command

sudo mongod --dbpath /usr/lib/unifi/data/db

6. Once the test run succeeds, you will need to edit mongo_prune_js.js to disable the dry run by changing the line to var dryrun=false;

nano mongo_prune_js.js

7. Prune the database by executing the following command. This will take a while to run depending on the number of events you have; I had about 2 million events.

mongo --port 27117 < mongo_prune_js.js

8. If all goes well you should see a couple of OK messages like the following

{ "ok" : 1}

9. Change the db and log file permissions using the following commands (NOTE: this is an important step)

chown -R unifi:unifi /usr/lib/unifi/data/db/
chown -R unifi:unifi /usr/lib/unifi/logs/server.log

10. Finally start the unifi service using the following

service unifi start

11. If all goes well you should be able to log in to the UniFi web interface now.


Password Protect PDF documents using a Windows PowerShell script

Password protecting PDF documents can be done with a Windows PowerShell script. I stumbled across a powerful script which allows you to password protect a PDF document. The requirement I have is to password protect a batch of PDF documents, each with its own password, and then send each password-protected PDF to a different individual by email.

After researching for a while I found a script that does the job, but then I came across another hurdle: how do I get the list of passwords and the list of PDFs? My solution is to put the list of PDF files along with their passwords in an Excel document, and let the script open the Excel file, find each PDF document, read the designated password next to it, password protect the file, and save the protected copy in a different location.

Hunting around for a PowerShell script to read Excel proved fruitful, so I combined both into one script. Here is a step-by-step description of what I did and how to modify it to suit your needs. This sped up the manual work by a lot, not to mention the time and cost savings that come along with it.

Step 1. Create the list with the location of each PDF document and the password you want to protect it with.

We have the source document in column B, the target encrypted PDF file location in column C, and the password for each document in column D (the script below also reads a reference value, such as a contract number, from column A). Save this Excel file in C:\temp\ and let’s call it Book2.xlsx for this exercise.
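As an illustration only (the contract numbers, paths and passwords below are made up), the spreadsheet might look like this, with headings in row 1 and the data starting in row 2:

A (Contract)   B (Source PDF)              C (Encrypted PDF)            D (Password)
C-1001         C:\temp\src\invoice1.pdf    C:\temp\out\invoice1.pdf     Secret01
C-1002         C:\temp\src\invoice2.pdf    C:\temp\out\invoice2.pdf     Secret02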

Step 2. Open your favourite text editor and paste in the following code; you can also use the “Windows PowerShell ISE”:

[System.Reflection.Assembly]::LoadFrom("itextsharp.dll")

function PSUsing {
 param (
 [System.IDisposable] $inputObject = $(throw "The parameter -inputObject is required."),
 [ScriptBlock] $scriptBlock = $(throw "The parameter -scriptBlock is required.")
 )
 
 Try {
 &$scriptBlock
 }
 Finally {
 if ($inputObject.psbase -eq $null) {
 $inputObject.Dispose()
 } else {
 $inputObject.psbase.Dispose()
 }
 }
}

$xlCellTypeLastCell = 11 
$startRow,$col=2,1

$excel=new-object -com excel.application
$wb=$excel.workbooks.open("c:\temp\Book2.xlsx")

for ($i=1; $i -le $wb.sheets.count; $i++)
 {
 $j=0;
 $sh=$wb.Sheets.Item($i)
 $endRow=$sh.UsedRange.SpecialCells($xlCellTypeLastCell).Row
 $rangeAddress=$sh.Cells.Item($startRow+1,$col).Address() + ":" +$sh.Cells.Item($endRow+1,$col).Address()
 $sh.Range($rangeAddress).Value2 | foreach {
 $contract=$sh.Cells.Item($startRow + $j,$col).Value2
 $filesource = $sh.Cells.Item($startRow + $j,$col+1).Value2
 $filedest = $sh.Cells.Item($startRow + $j,$col+2).Value2
 $dob=$sh.Cells.Item($startRow + $j,$col+3).Value2
 
 New-Object PSObject -Property @{Contract=$contract;Dob=$dob}
 
 $file = New-Object System.IO.FileInfo $filesource
 $fileWithPassword = New-Object System.IO.FileInfo $filedest
 $password = $dob
 PSUsing ( $fileStreamIn = $file.OpenRead() ) { 
 PSUsing ( $fileStreamOut = New-Object System.IO.FileStream($fileWithPassword.FullName,[System.IO.FileMode]::Create,[System.IO.FileAccess]::Write,[System.IO.FileShare]::None) ) { 
 PSUsing ( $reader = New-Object iTextSharp.text.pdf.PdfReader $fileStreamIn ) {
 [iTextSharp.text.pdf.PdfEncryptor]::Encrypt($reader, $fileStreamOut, $true, $password, $password, [iTextSharp.text.pdf.PdfWriter]::ALLOW_PRINTING)
 }
 }
 }
 
 $j++
 }
}
$excel.Workbooks.Close()
$excel.Quit()   # quit Excel so no hidden Excel process is left running

 

Step 3. Make sure you have the itextsharp.dll library located in the same location as the PowerShell Script.

In this example it should be located in C:\Temp\, alongside the script and the Excel file.

Step 4. Put the source PDF files in the locations specified in Step 1, then execute the PowerShell script by clicking the run button if you are using Windows PowerShell ISE.

Step 5. If everything went well, you should see the progress messages for each row as the script runs.

Step 6. The destination folders specified in the Excel file in Step 1 should now contain all the password-protected PDFs.

Now you can test these password-protected PDF files by opening them with the passwords you specified in Step 1.

Please leave me a comment if you like this post, or like it if it helped you save some time automating a boring task.

 


Solving “insecure string pickle” in youtube-dl

I was trying to use youtube-dl to download something for my missus, and the “insecure string pickle” error was preventing the video from being downloaded. After a few hours of searching the internet, I finally found a blog post that solved my problem.

The solution is quite simple:

brew install libav ffmpeg rtmpdump
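Once the packages are installed, you can confirm that ffmpeg is on your PATH and retry the download (the URL below is only a placeholder):

ffmpeg -version
youtube-dl "https://www.youtube.com/watch?v=VIDEO_ID"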

Just to note, my system is a Hackintosh running El Capitan with Homebrew installed.


Windows 10 can’t see network drive

I had a problem where my Windows 10 machine couldn’t see a network drive; none of the mapped drives could be accessed. I had been pulling my hair out over this issue. I got a new HUAWEI router (NBN), and somehow this mucked up my Windows 10 mapped drives. I had a NAS drive mapped in Windows 10, and every now and again Windows 10 Home would no longer be able to access it. Strangely, all the other computers (I have several Macs in the house) kept working fine connected via the same router. Here are some instructions on how to fix Windows 10 network file sharing; they worked for me, so I hope they will save you some time.

After countless hours of tweaking and Google searching for why Windows 10 can’t see the network drive, I found the following method to be the most effective way to resolve the issue.

 

Step 1. Open Settings by clicking the “Windows” button in the bottom left corner and selecting Settings.

Step 2. Click on the Network & Internet icon


Step 3. You will see the Network &amp; Internet screen; scroll down until you find “Sharing options” and click on it.




TERM support in Screen command

If you get the following error message when trying to start screen:

Cannot find terminfo entry for 'xterm-256color'.

you will need to find out which TERM values are supported:

ls /usr/share/terminfo/x

This will give you a list of supported TERMs, for example:

xterm
xterm-xfree86

Set the TERM environment variable to one of them:

export TERM=xterm-xfree86

and then run the screen command, pointing it at the terminfo directory:

TERMINFO='/usr/share/terminfo/' screen

Fixing corrupted Time Machine backup

If you are a Mac user, the following message pops up at times:

“Time Machine completed a verification of your backups. To improve reliability, Time Machine must create a new backup for you.”

This means that:

  • Time machine will delete all your existing backups, and
  • Create a new initial backup

You will lose all your previous backups, so let’s try to fix this.

Step 1. Preparations

Time Machine saves all backups and every piece of meta information into a *.sparsebundle file on your network storage device, so this is the file we need to fix.

In order to access it, connect with Finder to the network share that contains the backup. It will hold one or more *.sparsebundle files in its root, one for each Mac that uses this drive with Time Machine.

After that, open a Terminal window and switch to the root (=admin) user with the following command:

sudo su -

Enter your password if you’re asked to do so.

When you see the error message I mentioned above for the first time, Time Machine has already flagged your backup’s *.sparsebundle file as bad. As a first step, we need to undo this:

chflags -R nouchg /Volumes/<name of network share>/<name of backup>.sparsebundle

Make sure to replace <name of network share> and <name of backup> accordingly and give it a while to finish.

Step 2. Mounting your backup

Next, we need to mount your backup’s *.sparsebundle file so your system can run a few checks on it:

hdiutil attach -nomount -noverify -noautofsck /Volumes/<name of network share>/<name of backup>.sparsebundle

This command will return something like this:

/dev/diskX Apple_partition_scheme
/dev/diskXs1 Apple_partition_map
/dev/diskXs2 Apple_HFS

What is important to us is the line containing Apple_HFS or Apple_HFSX (usually the last line), which shows the device ID followed by the device type. The X will be replaced with a number, anywhere between 2 and 6 or even higher. So take note of the device ID, which looks something like /dev/disk2s2 or /dev/disk3s2.

Step 3. Repairing the *.sparsebundle

Now we can finally let OS X do its magic of trying to repair the file. Run the following command after replacing the dummy device ID with your own:

fsck_hfs -drfy /dev/diskXs2

With this kicked off, you may want to go have a coffee or two, as it may take anywhere from 15 minutes to multiple hours, depending on the size of your backup and the speed of your network connection.

After the command finishes, you’ll either see »The Volume was repaired successfully« or »The Volume could not be repaired«.

Either way, you need to unmount the backup before taking any further steps:

hdiutil detach /dev/diskXs2

Step 4. Finishing touches

If the repair command from the previous step failed, there’s not much you can do about it; your backups are probably lost for good. You should just let Time Machine create a new backup for you.

If it succeeded though, there is a last step we need to take in order to convince Time Machine to keep using the existing backup.

Use the Finder to navigate to the backup’s *.sparsebundle file and, with a right click, choose »Show Package Contents« to look inside. There you’ll find a file called com.apple.TimeMachine.MachineID.plist. Since Time Machine also uses this file to mark bad backups, we need to modify it slightly.

Open the file with a text editor of your choice and find the lines saying

<key>RecoveryBackupDeclinedDate</key>
<date>{any-date-string}</date>

and remove them completely. Next find the lines

<key>VerificationState</key>
<integer>2</integer>

and change them to

<key>VerificationState</key>
<integer>0</integer>

Done. You probably want to eject/unmount the network share from Finder and tell Time Machine to do another backup now.