Laravel & MongoDB – relationships with ObjectId

I’ve used Laravel with the jenssegers/laravel-mongodb package for a long time. It’s a wonderful thing: it makes MongoDB easy to use, integrates most features with Eloquent and Laravel models, and also offers hybrid relationships across different database drivers. In theory it supports all relationships in a MongoDB database, but in practice there are a lot of potential issues because of the current implementation.

In MongoDB you can use many fields to create relations and then use them in $lookup stages (aggregations), but the most common practice is to use ObjectId – it’s the default type for keys (_id). You can of course use a different one, but you probably won’t. The problem is that the jenssegers package uses strings for all relationships – and it works (though not always), until you want to use custom aggregations and lookups. Here is an example of a parent model with a child relationship:

use Jenssegers\Mongodb\Eloquent\Model;
use Jenssegers\Mongodb\Relations\HasMany;

class ParentModel extends Model
{
    public function children(): HasMany
    {
        return $this->hasMany(ChildModel::class, 'parent_model_id');
    }
}

Then you want to add a child to a parent model instance:

$child = new ChildModel();
$parentModel->children()->save($child);

What will be the effect? The child record will have a parent_model_id field, of course, but it will be a string, not an ObjectId. That’s fine until you need to use $lookup – in that case, a simple join will NOT work, because you have an ObjectId key (_id) in the parent collection and a string relation field in the child collection. You would need to prepare an additional field using $addFields in the parent collection before the $lookup stage, or in the child collection inside the $lookup pipeline. It’s not an efficient way to solve the problem.
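As an illustration, the $addFields workaround could look like the pipeline below – a sketch in plain PHP arrays, where the collection name children and the helper field id_as_string are my assumptions, and the $toString operator requires MongoDB 4.0+:

```php
<?php

// Sketch of the $addFields workaround (hypothetical collection "children"):
// convert the parent ObjectId to a string before the $lookup stage,
// so it matches the string parent_model_id field on the child documents.
$pipeline = [
    ['$addFields' => ['id_as_string' => ['$toString' => '$_id']]],
    ['$lookup' => [
        'from'         => 'children',
        'localField'   => 'id_as_string',
        'foreignField' => 'parent_model_id',
        'as'           => 'children',
    ]],
];
```

With the jenssegers package, a pipeline like this could be run through the model’s raw() method, e.g. `ParentModel::raw(fn ($collection) => $collection->aggregate($pipeline))` – an extra stage on every query, which is exactly the inefficiency described above.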

So, how to handle that? The solution is easy and already available in the package’s development version: skip the automatic casting of the key to a string. Unfortunately, the package does not look actively maintained, so we may wait a while longer for an updated release. But we can add the required change right now. Just override getIdAttribute on ParentModel so that it always returns the raw value, without any modification:

class ParentModel extends Model
{
    // Always return the raw key value (an ObjectId), skipping the string cast.
    public function getIdAttribute($value = null)
    {
        return $value;
    }
}

After that change, the code that adds a child:

$parentModel->children()->save($child);

will no longer store a string in the parent_model_id field. It will be an ObjectId, and everything will work correctly: built-in Laravel relationships as well as $lookups, without any additional fields.

Any drawbacks? Yes: after this change, you need to remember to cast the key whenever you want to send it to a client as a string or use it in comparisons:

$parentModel->getKey() === (string) $parentModel->getKey();
// before the change: true (both sides are strings)
// after the change: false, because the left side is an ObjectId instance

// JSON resource:

return [
    'id' => (string) $parentModel->getKey(),

    // or
    'id' => $parentModel->getKey()->__toString(),
];
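The comparison behavior can be seen without a database by simulating it with a tiny stand-in class (FakeObjectId is made up for this example; the real MongoDB\BSON\ObjectId likewise implements __toString):

```php
<?php

// Minimal stand-in for MongoDB\BSON\ObjectId, just to illustrate
// why strict comparisons change after the fix.
final class FakeObjectId
{
    public function __construct(private string $oid) {}

    public function __toString(): string
    {
        return $this->oid;
    }
}

$key = new FakeObjectId('507f1f77bcf86cd799439011');

var_dump($key === '507f1f77bcf86cd799439011');          // false: object vs string
var_dump((string) $key === '507f1f77bcf86cd799439011'); // true after an explicit cast
```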

But I think it’s not a big deal – it’s better to always have one type than a mix of types in the same field.

The diary makes no sense to me

I read in a few books that we should keep a journal: make some notes about each day, describe our feelings, our ideas, our thoughts. I decided “challenge accepted” and started writing my own journal at the end of June. The plan was to do it for three months and maybe continue after that. Why three months? Because that’s always the period for my main goals. After two months I decided to stop, and it’s a good time to share my experiences.

So, why did I decide to stop the experiment? Well, I noticed that I have a real problem with writing such notes. The best time to do it is the evening, after the whole day and with all that “luggage of feelings” – all the problems, dilemmas, joys and sorrows. But in my case, it was almost impossible. After work I usually go for a workout, sometimes running, sometimes cycling. I get up early to do some important things before work, so in the evenings I’m often tired, and I also don’t want to use my phone or computer again before bedtime. As a result, it was difficult to write notes in the evenings.

OK, so I decided to write it in the morning, also before work. It was better, but there was another problem – I didn’t know what I should write about… After sleep, my mind is clear. Usually, if I have some dilemma, I go to sleep and after waking up, I have an answer. That produced a lot of feelings that were… outdated? Yeah, I think that’s a good word for it. Because of that, the notes completely lost their sense. It should be something to organize thoughts before sleep, to help my brain work through all the information during the night, but that wasn’t possible.

The second problem was that I didn’t go back to my notes. After two weeks I realized that with only a date as a title, they are completely “anonymous” and don’t give me any feedback. So I started to put the most important thing in the title, like “07/14/2021: the longest bike ride”. It was much better, but the problem remained – when should I review the notes, and what should I do with them? It felt like writing for… someone else, after my death.

The third and last, but very important thing: if I missed a day of journaling, it mattered to me and evoked negative emotions – self-blame is very easy in such cases, because we don’t remember all the other things we had to do. It was just like an unachieved goal… Right now I know that’s a foolish approach, because goals are a bad idea – a system is better, as in “I have a system for writing daily notes”, not a specific goal. But this was a test, to check what a daily diary could give me.

I don’t think a journal is a bad idea – it’s a great idea. The problem is that it won’t work the same way for everyone. I decided to use my phone’s notes app more often – to write quick notes immediately after an event that causes feelings, dilemmas or problems. It’s more realistic; it’s something like a “snapshot of the present”, like a photograph of an important moment in life. Right now I’m thinking about how to find some time before sleep to organize all the information from the whole day, because I think it can help a lot.

Migrating from macOS to Windows 10

In the last few posts I mentioned that I decided to switch from a MacBook and macOS to Windows. Now is a good time to describe the reasons, pros, and cons of that decision. It wasn’t an easy call, because I’ve used MacBooks for about 5 years and I was quite happy with them. But… the last two years made me rethink both platforms – right now Apple is behind the competition. Of course, they released new Macs with their M1 chips, and it looks interesting, but there are still a lot of important limitations, in both hardware and software. At the same time, Microsoft did a lot to convince developers and encourage them to migrate. Maybe I’m wrong, but I bet Microsoft will win this race: not because of hardware support – they depend on Intel and AMD – but because of software support and, still, amazing backward compatibility.

I’ve used a MacBook Pro 15” from 2018, and it’s still pretty nice: an 8th-gen Intel i7 with 6 cores and 12 threads, 16 GB of RAM, a 256 GB NVMe SSD, and two GPUs – an integrated Intel GPU plus a Radeon Pro enabled when required or when an external display is connected. In theory, it should provide plenty of power, enough to work comfortably as a full-stack developer. Unfortunately, it doesn’t. Because of the macOS architecture, Docker is slow, especially on I/O. We use Docker a lot, so… it’s a problem. Other things: I really don’t need a Radeon GPU – I don’t do anything related to GPUs, the Metal API, etc. The Intel GPU and the hardware acceleration it provides are more than enough for me. Unfortunately, on a Mac it’s not possible to connect an external display and keep using the Intel GPU – the Radeon is enabled, consumes additional energy, and gives off heat. As a result, the whole machine is warmer and louder, and CPU performance (particularly important for me) drops because of throttling.

My new machine is a Dell 5501. It’s not the newest option, but there was an extremely attractive offer – it has a 9th-gen Intel i7 (also 6 cores and 12 threads, but with some vPro features), 32 GB of RAM, a 512 GB NVMe SSD and two GPUs: integrated Intel and a GeForce MX150. I ordered a model without the discrete graphics, but it was broken, and the shop sent me a better version. I don’t need the GeForce, but it isn’t a problem: Windows supports Nvidia Optimus and always uses the Intel GPU. The MX150 is activated only when needed – when I run some 3D apps, games, CUDA-based calculations, etc. Two other nice features: this notebook has a built-in fingerprint sensor and an IR camera, so it’s fully compatible with advanced biometrics and Windows Hello login. It works nicely, and I haven’t observed any issues with it. I needed 3 weeks to migrate, to move my daily work to the Windows machine, but right now it’s fine. Let’s talk about pros and cons.

Pros

Better hardware support 

Probably the most important thing is much better hardware support. Windows can work on almost any computer. You can choose an AMD or Intel CPU, and you can choose from many different video cards, sound cards, disks, etc. Of course this generates some driver-related issues, but it looks much different now than, for example, 6-7 years ago. I installed fresh Windows 10 as a Windows 7 upgrade on my fiancée’s notebook, and Windows Update found all the required drivers. Wow! I didn’t spend time looking for anything, like in years past. Today, it just works, without issues. You can also build your own computer and select exactly what you need, for example an extremely fast CPU or GPU. It’s not a problem. On Macs, you have limited choice. Very limited, because Apple uses only Intel CPUs and the M1 chip. They also limit GPUs – for a few years now, they have not supported Nvidia cards. Do you want to use CUDA? Sorry, no way. Do you need more RAM or a bigger SSD? No problem, but the prices are extremely high. For example, in Poland you have to spend an additional 2000 PLN to upgrade Mac memory from 16 GB to 32 GB (just 16 GB more). In regular shops you can buy 64 GB (2x 32 GB) of notebook memory for about… 1600 PLN. Four times more for less money. It’s absurd.

The second thing related to hardware: external ports. Current Macs have 2 or 4 Thunderbolt 3 / USB-C ports. It’s a very, very powerful port – it can transfer 40 Gb/s, carry audio and video, charge your device, etc. It sounds good, but in reality… how can I quickly connect my friend’s pendrive? Or an external disk? Or my Android phone? Or maybe a TV? It’s a problem: we need a USB-C hub or a docking station. An additional dongle with a lot of additional issues – some of them don’t provide 85 W charging (just 40 W or 65 W), and most of them can’t support 4K at 60 Hz, being limited to 30 Hz. I struggled with that and it’s no fun – it also means you have to spend additional money. The new Dell has some traditional USB-A ports, HDMI 2.0 (no problem with 4K at 60 Hz!), LAN, a card reader and also one USB-C port with Thunderbolt 3 – enough for all the accessories I have, and I don’t have to buy any hub or docking station. Simple and clever.

External displays and antialiasing

It isn’t the same in both worlds. Starting with macOS… I think it was High Sierra when Apple changed the text antialiasing method. After that, text started to look pretty bad on non-HiDPI/non-Retina displays. Fonts are not sharp; there are a lot of jagged edges. They said it was for performance: the new method uses grayscale antialiasing and should be much faster. I understand them – Microsoft did the same thing in Windows 8, abandoning their ClearType antialiasing in favor of grayscale. But on Windows, text still looks fine. In effect, if you want to see sharp, nice fonts on an Apple device, you have to spend more money and buy a better (HiDPI) display. But that will not solve all the issues – it will add new ones! Why? Because of the lack of compatibility and the scaling method on Apple systems. I will write a bigger post about that, so here is just the quick version: scaling on Apple works fine, but only if you use Retina-compatible devices like the very expensive LG UltraFine 5K monitors. If you buy, for example, a very popular 4K 27” display and want to use a scaled resolution, let’s say 3072x1728 px, your Mac will render at… 6144x3456 px and then downscale to fit your display’s real resolution. After that, some fonts may be blurry.

Do you want to connect multiple monitors to a Mac using just one cable? No problem, but you have to use monitors with Thunderbolt 3 – again, the expensive LG UltraFine. What about Windows? Scaling here may be an issue, because every app can look a bit different… but as I mentioned earlier, the antialiasing method is better, and you don’t have to buy a HiDPI device and scale anything. It’s just simpler. Connecting multiple monitors? No problem: you can use DisplayPort daisy chaining and connect, for example, 3x WQHD or 2x 4K displays using just one cable. It will work with any monitor that supports daisy chaining – trust me, there are a lot of them in different price ranges.

Speed – it’s much faster

To be honest, MacBooks and macOS are fast. But it’s mostly visible in built-in or Apple apps: Safari, Notes, iWork (Pages, Numbers, Keynote), Mail, Maps. All of them work perfectly fine. But other, third-party apps look much worse. I tried Chrome, Edge, Microsoft Office, LibreOffice, Slack, PhpStorm, Postman… none of them are as fast as I think they should be. Opening apps is slow; some operations are also slow. Why? I don’t know – I think it was better on OS X 10.11, but starting from macOS 10.12 Sierra, everything started to slow down. On the other hand, on Windows 10 everything is blazing fast. Not only native apps, but most third-party ones too. Everything is just smooth. The animations are not as polished as on macOS, but… it’s a hammer, not a picture.

WSL – Windows Subsystem for Linux

For me as a full-stack developer, WSL is a real game-changer. It’s the Windows Subsystem for Linux. Right now, you can add many different Linux distributions to Windows 10 and use their tools as if they were native – for example bash, ssh, git, node and many others. The roadmap also includes Linux GUI apps, so the future looks very interesting. But wait a moment… Apple has terminal tools and the amazing brew package manager, so what’s wrong there? Nothing, until you need Docker. If you want to use Docker on a non-Linux machine without a real Linux kernel, it will create a small virtual machine between the host system and all the containers. That “broker” is OK, but it causes a lot of slowdown, especially on I/O, which is very important in modern front-end development. Before WSL, it worked in exactly the same way on Windows 10 and Docker was slow, but right now it can integrate with WSL. In effect, it’s almost as fast as a native Linux distribution. The second advantage: I don’t have to install tools like git or node on Windows – my WSL distribution provides them, my IDE (PhpStorm) integrates with WSL, and everything works out of the box. I think WSL is a great tool for developers. What about Apple M1 and their next chips? It’s problematic, because with Big Sur they added the new Virtualization framework. It should be better and faster… but it isn’t. I did some tests, and it’s much slower than HyperKit. Right now, without clear evidence, I can’t believe they will improve that.

More apps & better compatibility

Did you know you can start with Windows 1 (yes, the very old one) and upgrade it step by step to Windows 10? Yes, it’s possible and it will work – some apps will keep working too! Windows has great backward compatibility. Sometimes it’s an issue, because it means Microsoft can’t force some new solutions, but for users it’s great – there are tons of software, commercial, open source or freeware, built over the last decades and still very good. Because of the Win32 API, you can find an app for almost anything within a few minutes. I still remembered apps I used before migrating to the Mac, and they are still here – most of them in newer versions, but all working perfectly. On Apple, that isn’t always possible. They removed 32-bit app support, they change internal APIs almost every year, and sometimes that can break apps you want to use. Right now I understand why big companies with a lot of computers and endpoint management systems use Windows – it’s easier and more predictable. Microsoft won’t break backward compatibility, and from a stability standpoint, that’s a big advantage. I want to work, not wonder whether my IDE will still work after the latest OS upgrade.

Better window handling

Windows, as the name suggests, focuses on app windows. There are a few very comfortable tools for working with them. There is Aero Snap – an option to quickly move windows to the left, right, top or corners. Windows 10 provides a big enhancement here: an assistant – if you snap one window to the left, it suggests putting another window on the right. A quick ALT + TAB lets you switch between active apps. It displays not only app icons, but also window previews, so switching is easy. It’s also no problem to just double-click a window’s title bar to make it “full-size” (not fullscreen). The Windows key + TAB shortcut displays a virtual desktop organizer with bigger previews and an even simpler way to switch between apps. Do you want to quickly jump to the desktop? No problem, just click the small strip at the end of the taskbar. All these small details make working with windows on Windows very comfortable for me. I just like these solutions.

It was hard to switch to the Mac a few years ago because most of these were missing. You will not find something like Aero Snap there: no assistant and no previews. You can snap apps, but it’s limited to exactly two apps, and both will work in fullscreen mode. That fullscreen mode is another strange thing – you can’t maximize an app easily (it works for some apps and not for others), but you can always make it fullscreen, with a hidden Dock and top menu. It’s OK for simple things, like focusing on writing blog posts, but it makes working with an IDE harder, because I lose time revealing the top bar with the app menu every time. I had to buy a BetterTouchTool license to solve some irritating macOS issues. One thing that is better there: tab support. A lot of macOS apps support tabs, including the system Finder. On Windows, it is app-dependent and, unfortunately, Windows Explorer doesn’t support them.

Cons

Worse virtual desktop support

Microsoft started to support virtual desktops with the Windows 10 release. It was a huge novelty, but it didn’t and still doesn’t work as I expected. Why? Because of strange limitations. Everything is fine if you don’t use external displays. Then you can create multiple desktops and quickly switch between them. By default, the taskbar is separate for each desktop – it’s strange, but you can quickly change that behavior in the settings. But what if you want to work with additional displays? Then, if you add a virtual desktop, it is added to all your screens. That’s not a bad option, but if you switch desktops on one screen… Windows will switch them on all screens. And that’s crazy and makes no sense. For example, I have a notebook and an external display, with two virtual desktops on each. I want a browser with dev tools on the first monitor desktop, my IDE on the second monitor desktop, Slack on the first notebook desktop and a terminal on the second notebook desktop. If I then switch from the browser to the IDE on the monitor, the virtual desktop on the notebook will also change, and I will see the terminal instead of Slack. Stupid.

There is an option to show an app’s window on all desktops, but that would limit my workflow to using virtual desktops only on the main screen. I think it works in a similar way on Apple by default, but there is an option to have separate virtual desktops per display, and it makes work very comfortable. I haven’t found a good solution for that yet, but because of daisy chaining and better monitor support, it’s not a big issue – I can just use an additional display instead of virtual desktops.

Not as polished as macOS

In my case, the computer is for work, and the system is just a foundation for all the tools I need. But it’s also important to me that this foundation should be… maybe not super elegant, but clear, consistent, not ugly. macOS is very consistent. When Apple introduced dark mode, all system apps and a lot of third-party apps supported it without issues. With Big Sur they changed the design, and it’s visible across the whole system. It’s pretty, it’s consistent, no strange things. On Windows, it looks much worse – right now Windows Explorer still has a few different context menu styles (!) depending on where you click. There is a new Settings app, but also a lot of old settings dialogs from Windows 7, Windows Vista or even earlier. You can activate dark mode, but it will work only in a limited set of apps – not even all of the system apps support it! It’s just a mess. Microsoft changed its vision for Windows a few times, and because of backward compatibility, that always generated such issues. There is a lack of consistency – it works, but it may look bad.

Lack of some apps

I wrote about the many apps and great compatibility on Windows, but I have to mention one thing – I miss some apps from macOS, like the great (and free, open source!) MySQL client called Sequel Pro. Another great example is Forklift – an SFTP client with OneDrive / Google Drive / Dropbox / S3 / etc. support. Such apps are great, with very good macOS integration, polished and reliable. On Windows, I have many replacements, but I have to spend some time choosing the best ones for my needs. I think it’s not a real drawback, because this problem exists in any migration, but I must note: finding a great-value app on macOS is, I think, simpler – maybe because of the limited choice, maybe because of different system support and developer goals. I’m not sure, but it’s simpler.

Another amazing app on Macs is Preview. Such a simple thing… just for previewing images or PDFs… but with a lot of useful tools. You can simply add annotations to files, you can drag & drop a single page from a big PDF document onto the desktop, you can scan your signature and then use it to sign documents. Another example is QuickTime – maybe not excellent at playing videos, but more than enough to record screen activity. I used it many times to share things within the company. Windows doesn’t have such apps; you have to spend some time finding alternatives – and all of them will be different.

It’s not “out of the box”

This “closed” Apple ecosystem has one gigantic advantage – they control almost all aspects of the device and the system. They can create machines that are good in every respect: very stable, you don’t have to think about drivers, and in most cases you don’t have to install antivirus software (of course, if you are not careful, macOS will not save your data). Everything is built in; the notebook has a brilliant, bright display, good speakers, nice microphones, and in normal usage it is very quiet (with Docker it is very loud). It’s very difficult to find something similar for Windows – a Microsoft Surface? Maybe, but according to reviews on the Internet, it’s far from ideal. The Dell XPS series? They are quite expensive and very similar to a MacBook, so… the choice is yours. I decided to go back to Windows because of speed, and right now I’m very happy.

WSL: fix Vmmem high CPU usage

Last time, when I wrote about using a trackpad instead of a mouse, I also mentioned my migration from macOS to Windows. Right now, I work on Windows and use WSL (Windows Subsystem for Linux) every day. It’s a great tool that I will describe in future posts, but today I just want to share some quick info – sometimes when we use WSL, we can observe high CPU usage by the Vmmem process. Without a clear reason, without any CPU-intensive operations inside WSL. In such a case you can open a WSL terminal, run the top or htop command and check CPU usage – it is probably caused by the init process inside your Linux distribution.

The problem is known and Microsoft is working to resolve it, but for now we have to use one of the possible workarounds. The first one is simply restarting the WSL distribution – it helps, and after reopening, Vmmem CPU usage should be back to normal. You can restart WSL using this simple command from PowerShell / Windows Terminal:

wsl --shutdown

Yes, a little strange, because it says “shutdown”, while in reality it will restart your WSL (it starts again the next time you open a distribution). This should resolve the issue, but it can happen again – many users, including me, report that the high CPU usage occurs after system sleep. I had similar issues with Docker Desktop on the Mac: it slowed down dramatically after sleep and wake-up, and the only fix was restarting Docker.

With WSL we can use a small trick – disabling the Windows Subsystem for Linux GUI (WSLg). Many users report that the issue is caused by that module, even if they don’t run any Linux GUI apps inside WSL. I don’t use them, so I decided to disable WSLg. The interesting thing is that WSLg is not available in the current stable Windows 10 build (19043), but it is probably already partially implemented and can cause issues. To disable WSLg, we have to edit the .wslconfig file. It isn’t present by default, so you have to create it manually at systemdrive:/Users/your-username/.wslconfig (yes, with the dot at the beginning). Then add this simple configuration to the file:

[wsl2]
guiApplications=false

Save the file and restart the computer, or just restart WSL using the command above. It should resolve the issue, and once Microsoft fixes it in the WSL core, it will be possible to re-enable GUI apps. By the way, you can configure other WSL settings in that file, like the maximum memory usage limit – just check the official Microsoft documentation.
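For example, the file could look like this (a sketch only – guiApplications is the setting described above, while the resource limits below are illustrative values, not recommendations; verify the option names against Microsoft’s documentation):

```ini
[wsl2]
guiApplications=false
# example resource limits - adjust to your machine
memory=8GB
processors=4
```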

Why and how to use Laravel Resources

Laravel is a pretty nice PHP framework and provides a lot of useful features. One of them is the Resource class. Very often we get some data from, e.g., a database and send it to our app’s client. Sending the whole model is a very bad option, and there are many reasons for that:

  • we probably do not want to disclose our models’ structure
  • some data may be confidential – maybe our client will not use those fields, but every user will be able to look at the request and response and get that data
  • many clients (e.g. mobile devices) do not need all the data
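The idea behind these points can be sketched in a few lines of plain PHP (the field names below are invented for illustration): keep a whitelist of safe fields and drop everything else before sending the response.

```php
<?php

// A raw record as it might come from the database (hypothetical fields).
$record = [
    'id'            => 7,
    'name'          => 'Widget',
    'secret_token'  => 'abc123',   // confidential - must not leak to clients
    'internal_flag' => true,       // internal - clients do not need it
];

// Keep only the fields we explicitly want to expose.
$exposed = ['id', 'name'];
$payload = array_intersect_key($record, array_flip($exposed));

echo json_encode($payload); // {"id":7,"name":"Widget"}
```

A Resource class is essentially this transformation, but organized in a dedicated, reusable place.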

Without the Resource class, we would have to create our own class and write transformations to prepare data from the database for clients. If we decide to use the built-in Resource, it becomes much, much simpler. We only have to return instances of such a class, pass our data into the constructor, and then declare which fields we really want to expose. Here is a simple example of a Laravel Resource which will prepare our data to be sent as JSON:

declare(strict_types=1);

namespace App\Modules\MyModule\Resources;

use Illuminate\Http\Resources\Json\Resource;

class MyModelResource extends Resource
{
    /**
     * @var MyModel
     */
    public $resource;

    public function toArray($request)
    {
        return [
            "id" => $this->resource->getKey(),
            "name" => $this->resource->my_model_name,
            "count" => $this->resource->my_model_attribute,
            // ...
        ];
    }
}

The comment with @var MyModel is optional, but it helps some IDEs recognize which model we use inside the Resource. And here is the usage in our controller:

public function getModel(Request $request): JsonResponse
{
   // fetch the model here (or inject it via route model binding)
   $model = MyModel::find($request->id);   

   return response()->json(
       new MyModelResource($model)
   );
}

Simple, clean and elegant, because we transform our model in a separate place. If required, we can modify our data in a more complex way, no problem with that. We can also use… a Resource inside a Resource, so it’s fine to do something like this:

public function toArray($request)
{
    return [
        "id" => $this->resource->getKey(),
        "name" => $this->resource->my_model_name,
        "relation" => new MyModelRelationResource($this->resource->relation),
        // ...
    ];
}

Collection Resources

What if we have a lot of items and want to send them as JSON? That’s also not a problem, because we can use the built-in ResourceCollection class. Here is another example:

declare(strict_types=1);

namespace App\Modules\MyModule\Resources;

use Illuminate\Http\Resources\Json\ResourceCollection;

class MyModelCollectionResource extends ResourceCollection
{
    public function toArray($request)
    {
        return MyModelResource::collection($this->collection);
    }
}

Of course, we can modify or transform the collection in the same way as in a normal Resource – it’s just the response array. Usage in a controller:

public function getModels(Request $request): JsonResponse
{
    $models = MyModel::all();

    return response()->json(
        new MyModelCollectionResource($models)
    );
}

As you can see, it’s very easy to use, keeps everything more organized, and solves a lot of issues.